Spark Notebook
The Spark Notebook is an open source notebook aimed at enterprise environments, providing data scientists and data engineers with an interactive web-based editor that combines Scala code, SQL queries, Markdown and JavaScript in a collaborative manner to explore, analyse and learn from massive data sets.
The Spark Notebook allows performing reproducible analysis with Scala, Apache Spark and the Big Data ecosystem.
Features Highlights
Apache Spark is available out of the box, and is simply accessed via the variable `sparkContext` or `sc`.
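For example, a first notebook cell can use the pre-bound context directly (a minimal sketch; it assumes a running notebook where `sc` is already defined):

```scala
// Inside a notebook cell: `sc` is already bound to the SparkContext,
// so a distributed computation is one line away.
val evens = sc.parallelize(1 to 100)  // distribute a local range
              .filter(_ % 2 == 0)     // keep even numbers
              .count()                // triggers the Spark job
```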
Multiple Spark Context Support
One of the most useful features of the Spark Notebook is its isolation of running notebooks.
Each started notebook spawns a new JVM with its own SparkSession instance. This allows maximal flexibility to:
- use dependencies without clashes
- access different clusters
- tune each notebook differently
- schedule notebooks externally (on the roadmap)
Metadata-driven configuration
We achieve maximum flexibility with the availability of multiple `sparkContext`s by enabling metadata-driven configuration.
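A sketch of what this looks like: each notebook's JSON metadata can carry its own Spark settings, so two notebooks can target different masters or memory budgets (the `customSparkConf` key follows the project's metadata convention; the values below are illustrative):

```json
{
  "name": "my-analysis",
  "customSparkConf": {
    "spark.master": "local[4]",
    "spark.executor.memory": "2g",
    "spark.app.name": "my-analysis"
  }
}
```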
Scala
The Spark Notebook exclusively supports the Scala programming language, the unpredicted lingua franca of data science, and extensively exploits the JVM ecosystem of libraries to drive a smooth evolution of data-driven software from exploration to production.
The Spark Notebook is available for *NIX and Windows systems in easy-to-use ZIP/TAR, Docker and DEB packages.
Reactive
All components in the Spark Notebook are dynamic and reactive.
The Spark Notebook comes with dynamic charts, and most (if not all) components can listen for and react to events. This is very helpful in many cases, for example:
- data entering the system live at runtime
- visual plotting of events
- multiple interconnected visual components
Dynamic and reactive components mean that you don't have to write HTML, JavaScript, or server code for basic use cases.
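To illustrate the idea (a plain-Scala sketch of the observer pattern, not the notebook's actual widget API):

```scala
// Minimal observable value: components subscribe and react when data changes,
// the way a notebook chart re-renders when new data arrives.
class Observable[A](private var value: A) {
  private var listeners: List[A => Unit] = Nil
  def subscribe(f: A => Unit): Unit = { listeners ::= f; f(value) }
  def :=(a: A): Unit = { value = a; listeners.foreach(_(a)) }
}

val points = new Observable[Seq[Int]](Seq.empty)
var rendered: Seq[Int] = Seq.empty
points.subscribe(ps => rendered = ps)  // e.g. a chart listening for new data
points := Seq(1, 2, 3)                 // live data entering the system
```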
Quick Start
Go to Quick Start for our 5-minutes guide to get up and running with the Spark Notebook.
Come on over to Gitter to discuss things, to get some help, or to start contributing!
Learn more
- Explore the Spark Notebook
- HTML Widgets
- Visualization Widgets
- Notebook Browser
- Configuration
- Running on Clusters and Clouds
- Community
- Advanced Topics
- Using Releases
- Building from Sources
- Creating Specific Distributions
- Creating your own custom visualizations
- User Authentication
  - Supports: Basic, Form & Kerberos auth, and many more via pac4j (OAuth, OpenID, ...)
  - Passing the logged-in user to secure Hadoop+YARN clusters via the `proxy-user` impersonation
- Advanced: How to develop/improve spark-notebook
Testimonials
Skymind - Deeplearning4j
Spark Notebook gives us a clean, useful way to mix code and prose when we demo and explain our tech to customers. The Spark ecosystem needed this.
Vinted.com
It allows our analysts and developers (15+ users) to run ad-hoc queries, to perform complex data analysis and data visualisations, and to prototype machine learning pipelines. In addition, we use it to power our BI dashboards.