MMLSpark

Microsoft Machine Learning for Apache Spark

MMLSpark is an ecosystem of tools aimed towards expanding the distributed computing framework
Apache Spark in several new directions.
MMLSpark adds many deep learning and data science tools to the Spark ecosystem,
including seamless integration of Spark Machine Learning pipelines with Microsoft Cognitive Toolkit
(CNTK)
, LightGBM and
OpenCV. These tools enable powerful and highly scalable predictive and analytical models
for a variety of data sources.

MMLSpark also brings new networking capabilities to the Spark ecosystem. With the HTTP on Spark project, users
can embed any web service into their SparkML models. In this vein, MMLSpark provides easy-to-use
SparkML transformers for a wide variety of Microsoft Cognitive Services. For production-grade deployment, the Spark
Serving project enables high-throughput, sub-millisecond-latency web services, backed by your Spark cluster.
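
For example, the Cognitive Services transformers drop straight into a SparkML pipeline. Below is a minimal sketch, assuming the TextSentiment transformer from mmlspark.cognitive, a DataFrame df with a "text" column, and placeholder credentials (cognitive_services_key and the "eastus" region are assumptions, not values from this document):

from mmlspark.cognitive import TextSentiment

# Placeholder credentials: supply your own Cognitive Services key and region
sentiment = (TextSentiment()
    .setSubscriptionKey(cognitive_services_key)
    .setLocation("eastus")
    .setTextCol("text")
    .setOutputCol("sentiment"))

# Adds a "sentiment" column by calling the Text Analytics service for each row of df
results = sentiment.transform(df)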

MMLSpark requires Scala 2.11, Spark 2.4+, and Python 3.5+.
See the API documentation for Scala and for PySpark.

Projects

  • Vowpal Wabbit on Spark: Fast, Sparse, and Effective Text Analytics
  • The Cognitive Services on Spark: Leverage the Microsoft Cognitive Services at Unprecedented Scales in your existing SparkML pipelines
  • LightGBM on Spark: Train Gradient Boosted Machines with LightGBM
  • Spark Serving: Serve any Spark Computation as a Web Service with Sub-Millisecond Latency
  • HTTP on Spark: An Integration Between Spark and the HTTP Protocol, enabling Distributed Microservice Orchestration
  • CNTK on Spark: Distributed Deep Learning with the Microsoft Cognitive Toolkit
  • LIME on Spark: Distributed, Model Agnostic Interpretations for Classifiers
  • Spark Binding Autogeneration: Automatically Generate Spark bindings for PySpark and SparklyR
  • Isolation Forest on Spark: Distributed Nonlinear Outlier Detection
  • CyberML: Machine Learning Tools for Cyber Security
  • Conditional KNN: Scalable KNN Models with Conditional Queries

Examples

  • Create a deep image classifier with transfer learning (example 9)
  • Fit a LightGBM classification or regression model on a biochemical dataset
    (example 3); to learn more, check out the LightGBM documentation page
    (a minimal usage sketch also appears after this list).
  • Deploy a deep network as a distributed web service with MMLSpark
    Serving
  • Use web services in Spark with HTTP on Apache Spark
  • Use Bi-directional LSTMs from Keras for medical entity extraction
    (example 8)
  • Create a text analytics system on Amazon book reviews (example 4)
  • Perform distributed hyperparameter tuning to identify Breast Cancer
    (example 5)
  • Easily ingest images from HDFS into a Spark DataFrame (example 6)
  • Use OpenCV on Spark to manipulate images (example 7)
  • Train classification and regression models easily via implicit featurization
    of data (example 1)
  • Train and evaluate a flight delay prediction system (example 2)

See our notebooks for all examples.
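
As a counterpart to the LightGBM bullet above, here is a minimal sketch of fitting a distributed LightGBM classifier; train_df, test_df, and the "features"/"label" column names are assumed placeholders rather than names from the example notebook:

from mmlspark.lightgbm import LightGBMClassifier

# Fit a gradient boosted tree classifier on a DataFrame with an assembled feature vector column
model = LightGBMClassifier(objective="binary",
                           featuresCol="features",
                           labelCol="label",
                           numIterations=100).fit(train_df)
predictions = model.transform(test_df)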

A short example

Below is an excerpt from a simple example of using a pre-trained CNN to
classify images in the CIFAR-10 dataset. View the whole source code in notebook example 9.

...
import mmlspark
# Initialize CNTKModel and define input and output columns
cntkModel = mmlspark.cntk.CNTKModel() \
  .setInputCol("images").setOutputCol("output") \
  .setModelLocation(modelFile)
# Score the dataset with the internal Spark pipeline
scoredImages = cntkModel.transform(imagesWithLabels)
...

See other sample notebooks as well as the MMLSpark
documentation for Scala and
PySpark.

Setup and installation

Python

To try out MMLSpark on a Python (or Conda) installation, you can get Spark
installed via pip with pip install pyspark. You can then use pyspark as in
the above example, or from Python:

import pyspark
spark = pyspark.sql.SparkSession.builder.appName("MyApp") \
            .config("spark.jars.packages", "com.microsoft.ml.spark:mmlspark_2.11:1.0.0-rc3") \
            .config("spark.jars.repositories", "https://mmlspark.azureedge.net/maven") \
            .getOrCreate()
import mmlspark

SBT

If you are building a Spark application in Scala, add the following lines to
your build.sbt:

resolvers += "MMLSpark" at "https://mmlspark.azureedge.net/maven"
libraryDependencies += "com.microsoft.ml.spark" %% "mmlspark" % "1.0.0-rc3"

Spark package

MMLSpark can be conveniently installed on existing Spark clusters via the
--packages option, examples:

spark-shell --packages com.microsoft.ml.spark:mmlspark_2.11:1.0.0-rc3
pyspark --packages com.microsoft.ml.spark:mmlspark_2.11:1.0.0-rc3
spark-submit --packages com.microsoft.ml.spark:mmlspark_2.11:1.0.0-rc3 MyApp.jar

This can be used in other Spark contexts too. For example, you can use MMLSpark
in AZTK by adding it to the .aztk/spark-defaults.conf file.

Databricks

To install MMLSpark on the Databricks cloud, create a new library from Maven
coordinates in your workspace.

For the coordinates use: com.microsoft.ml.spark:mmlspark_2.11:1.0.0-rc3
with the resolver: https://mmlspark.azureedge.net/maven. Ensure this library is
attached to your target cluster(s).

Finally, ensure that your Spark cluster has at least Spark 2.4 and Scala 2.11.

You can use MMLSpark in both your Scala and PySpark notebooks. To get started with our example notebooks, import the following Databricks archive:

https://mmlspark.blob.core.windows.net/dbcs/MMLSparkExamplesv1.0.0-rc3.dbc

Apache Livy

To install MMLSpark from within a Jupyter notebook served by Apache Livy, the following configure magic can be used. You will need to start a new session after this configure cell is executed.

Excluding certain packages from the library may be necessary due to current issues with Livy 0.5.

%%configure -f
{
    "name": "mmlspark",
    "conf": {
        "spark.jars.packages": "com.microsoft.ml.spark:mmlspark_2.11:1.0.0-rc3",
        "spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
        "spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.11,org.scalactic:scalactic_2.11,org.scalatest:scalatest_2.11"
    }
}

Docker

The easiest way to evaluate MMLSpark is via our pre-built Docker container. To
do so, run the following command:

docker run -it -p 8888:8888 -e ACCEPT_EULA=yes mcr.microsoft.com/mmlspark/release

Navigate to http://localhost:8888/ in your web browser to run the sample
notebooks. See the documentation for more on Docker use.

To read the EULA for using the Docker image, run:
docker run -it -p 8888:8888 mcr.microsoft.com/mmlspark/release eula

GPU VM Setup

MMLSpark can be used to train deep learning models on GPU nodes from a Spark
application. See the instructions for setting up an Azure GPU VM.

Building from source

MMLSpark has recently transitioned to a new build infrastructure.
For detailed developer docs, please see the Developer Readme.

If you are an existing MMLSpark developer, you will need to reconfigure your
development setup. We now support platform-independent development and
better integration with IntelliJ and SBT.
If you encounter issues, please reach out to our support email!

R (Beta)

To try out MMLSpark using the R autogenerated wrappers, see our
instructions. Note: This feature is still under development
and some necessary custom wrappers may be missing.

Papers

Learn More

Contributing & feedback

This project has adopted the Microsoft Open Source Code of Conduct. For more
information see the Code of Conduct FAQ or contact
opencode@microsoft.com with any additional
questions or comments.

See CONTRIBUTING.md for contribution guidelines.

To give feedback and/or report an issue, open a GitHub Issue.

Other relevant projects

Apache®, Apache Spark, and Spark® are either registered trademarks or
trademarks of the Apache Software Foundation in the United States and/or other
countries.
