Hadoop and Spark on Docker

Set up a Hadoop and/or Spark cluster running within Docker containers on a single physical machine.

  • Owner: weiqingy/caochong
  • Platform: Docker, Linux
  • License: Apache License 2.0



This tool sets up a Hadoop and/or Spark cluster running within Docker containers on a single physical machine (e.g. your laptop). It is convenient for debugging, testing, and operating a real cluster, especially when you run customized packages with changes to the Hadoop/Spark source code and configuration files.

Why Me

This tool is:

  • easy to use: just one command, run.sh (tell me, friend, can you ask for anything more?).
  • customizable: you can easily specify the cluster specs, e.g. HA enablement, number of datanodes, LDAP, security, etc.
  • configurable: you can either change the Hadoop and/or Spark configuration files before launching the cluster, or change them online by logging in to the containers.
  • elastic: imagine your physical machine running as many containers as you wish.

Running the distributed cluster in Docker containers beats the alternatives:

  1. a pseudo-distributed Hadoop cluster on a single machine, where it is nontrivial to run HA, launch multiple datanodes, or test the HDFS balancer/mover;
  2. setting up a real cluster, which is complex and heavyweight to use, assuming you can afford a real cluster in the first place;
  3. building an Ambari cluster on VirtualBox/VMware virtual machines: nice try, but let's see who runs faster.

Usage

The following illustrates the basic procedure for using this tool. It provides two ways to set up a Hadoop and Spark cluster: from-source and from-ambari.

From Source

The only step is to run from-source/run.sh.

$ ./run.sh --help
Usage: ./run.sh hadoop|spark [--rebuild] [--nodes=N]

hadoop       Set the running mode to Hadoop
spark        Set the running mode to Spark
--rebuild    Rebuild Hadoop in hadoop mode; otherwise rebuild Spark
--nodes      Specify the total number of nodes
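The flag handling shown in the help text above can be sketched as a small argument-parsing loop. This is a hypothetical reconstruction for illustration only; the variable names (MODE, REBUILD, NODES) and the default node count are assumptions, not necessarily what the real run.sh does.

```shell
# Hypothetical sketch of run.sh's argument parsing; names and
# defaults here are illustrative, not taken from the real script.
parse_args() {
    MODE="" REBUILD=false NODES=3    # assumed defaults
    for arg in "$@"; do
        case "$arg" in
            hadoop|spark) MODE="$arg" ;;              # running mode
            --rebuild)    REBUILD=true ;;             # rebuild Hadoop or Spark
            --nodes=*)    NODES="${arg#--nodes=}" ;;  # total node count
            *) echo "Usage: ./run.sh hadoop|spark [--rebuild] [--nodes=N]" >&2
               return 1 ;;
        esac
    done
}
```

For example, `parse_args hadoop --nodes=5` leaves MODE=hadoop and NODES=5, while an unrecognized argument prints the usage string and fails.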

From Ambari

Apache Ambari is a tool for provisioning, managing, and monitoring Apache Hadoop clusters. When Ambari is used to set up Hadoop and Spark, Spark runs in YARN client/cluster mode.
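For context, "client/cluster mode" refers to where the Spark driver runs. A submission against such a cluster might look like the following; spark-submit and its --master/--deploy-mode flags are standard Spark, but the jar path is illustrative and depends on your installation.

```shell
# Submit the bundled SparkPi example to YARN.
# --deploy-mode client runs the driver on the submitting machine;
# --deploy-mode cluster runs it inside a YARN container.
# The jar path is illustrative; adjust it to your Spark install.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  /path/to/spark-examples.jar 100
```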

  1. [Optional] Choose the Ambari version in the from-ambari/Dockerfile (default: Ambari 2.2)

  2. Run from-ambari/run.sh to set up an Ambari cluster and launch it

    $ ./run.sh --help
    Usage: ./run.sh [--nodes=3] [--port=8080] [--secure]
    
    --nodes      Specify the total number of nodes
    --port       Specify the local machine port used to access the Ambari Web UI (8080 - 8088)
    --secure     Set up the cluster as a secure cluster
    
  3. Hit http://localhost:port from a browser on your local computer, where port is the value passed on the run.sh command line; by default it is http://localhost:8080. NOTE: the Ambari server can take some time to fully come up and be ready to accept connections. Keep hitting the URL until you get the login page.

  4. Log in to the Ambari web page with the default credentials admin:admin.

  5. [Optional] Customize the repository Base URLs in the Select Stack step.

  6. On the Install Options page, use the hostnames reported by run.sh as the Fully Qualified Domain Name (FQDN). For example:

    Using the following hostnames:
    ------------------------------
    85f9417e3d94
    b5077ffd9f7f
    ------------------------------
    
  7. Upload from-ambari/id_rsa as your SSH Private Key to automatically register hosts when asked.

  8. Follow the onscreen instructions to install Hadoop (YARN + MapReduce2, HDFS) and Spark.

  9. [Optional] Log in to any of the nodes and you're all set to use an Ambari cluster!

    # login to your Ambari server node
    $ docker exec -it caochong-ambari-0 /bin/bash
    
  10. [Optional] To make the cluster secure, log in to your Ambari server node and run install_Kerberos.sh (you may need to chmod +x install_Kerberos.sh first).
    Then go back to the Ambari web page and follow the onscreen instructions to configure Kerberos.
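The wait in step 3 ("keep hitting the URL until you get the login page") can be scripted with a small retry helper. This is a sketch under stated assumptions: retry_until is a made-up name, not part of this tool, and curl is assumed to be installed on your local machine.

```shell
# retry_until ATTEMPTS DELAY CMD...
# Run CMD until it succeeds, up to ATTEMPTS times,
# sleeping DELAY seconds between tries.
retry_until() {
    local attempts="$1" delay="$2"; shift 2
    local i
    for i in $(seq 1 "$attempts"); do
        "$@" && return 0
        sleep "$delay"
    done
    return 1
}

# Example: wait up to 5 minutes for the Ambari login page to answer.
# retry_until 60 5 curl -sf -o /dev/null http://localhost:8080
```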

Key Metrics

Overview
  • Primary language: Shell (2 languages)

Owner activity
  • Created: 2016-08-05 23:37:36
  • Last pushed: 2020-09-21 07:47:22
  • Last commit: 2020-09-21 00:47:05
  • Releases: 0

Community
  • Stars: 77
  • Watchers: 7
  • Forks: 35
  • Commits: 87
  • Issues: 16 (10 open)
  • Pull requests: 0 (0 open, 0 closed)