Running Spark on Alluxio

    This guide describes how to configure Apache Spark to access Alluxio.

    Applications using Spark 1.1 or later can access Alluxio through its HDFS-compatible interface. Using Alluxio as the data access layer, Spark applications can transparently access data in many different types of persistent storage services (e.g., AWS S3 buckets, Azure Object Store buckets, remote HDFS deployments, etc.). Data can be actively fetched or transparently cached into Alluxio to speed up I/O performance, especially when the Spark deployment is remote to the data. In addition, Alluxio can help simplify the architecture by decoupling compute from physical storage. When the data path in the persistent under storage is hidden from Spark, changes to the under storage can be made independently of application logic; meanwhile, as a near-compute cache, Alluxio can still provide data locality to compute frameworks.

    Prerequisites

    • Java 8 Update 60 or higher (8u60+), 64-bit.
    • Make sure that the Alluxio client jar is available. This Alluxio client jar file can be found at /<PATH_TO_ALLUXIO>/client/alluxio-2.3.0-client.jar in the tarball distribution downloaded from the Alluxio download page. Alternatively, advanced users can compile this client jar from the source code by following the instructions for building Alluxio from source.

    The Alluxio client jar must be distributed across all nodes where Spark drivers or executors run. Place the client jar at the same local path (e.g., /<PATH_TO_ALLUXIO>/client/alluxio-2.3.0-client.jar) on each node.

    The Alluxio client jar must be on the classpath of all Spark drivers and executors in order for Spark applications to access Alluxio. Add the classpath properties to spark/conf/spark-defaults.conf on every node running Spark, and make sure the client jar is copied to every node running Spark.
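    The exact configuration lines are not shown above; a minimal sketch of the spark-defaults.conf entries, assuming the client jar path used earlier in this guide:

    ```properties
    spark.driver.extraClassPath   /<PATH_TO_ALLUXIO>/client/alluxio-2.3.0-client.jar
    spark.executor.extraClassPath /<PATH_TO_ALLUXIO>/client/alluxio-2.3.0-client.jar
    ```

    These are standard Spark classpath properties; adjust the path to wherever the client jar was placed on each node.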

    Examples: Use Alluxio as Input and Output

    This section shows how to use Alluxio as input and output sources for your Spark applications.

    Copy local data to the Alluxio file system. Put the LICENSE file into Alluxio, assuming you are in the Alluxio project directory:

    $ ./bin/alluxio fs copyFromLocal LICENSE /Input

    Run the following commands from spark-shell, assuming the Alluxio Master is running on localhost:

    val s = sc.textFile("alluxio://localhost:19998/Input")
    val double = s.map(line => line + line)
    double.saveAsTextFile("alluxio://localhost:19998/Output")

    Open your browser and check the Alluxio web UI (by default at http://localhost:19999). There should be an output directory /Output which contains the doubled content of the input file Input.

    Access Data in Under Storage

    Alluxio supports transparently fetching the data from the under storage system, given the exact path. For this section, HDFS is used as an example of a distributed under storage system.

    Put a file Input_HDFS into HDFS:

    $ hdfs dfs -put -f ${ALLUXIO_HOME}/LICENSE hdfs://localhost:9000/alluxio/Input_HDFS

    At this point, Alluxio does not know about this file since it was added to HDFS directly. You can verify this by going to the web UI. Run the following commands from spark-shell, assuming the Alluxio Master is running on localhost:

    val s = sc.textFile("alluxio://localhost:19998/Input_HDFS")
    val double = s.map(line => line + line)
    double.saveAsTextFile("alluxio://localhost:19998/Output_HDFS")

    Open your browser and check the Alluxio web UI (by default at http://localhost:19999). There should be an output directory Output_HDFS which contains the doubled content of the input file Input_HDFS. Also, the input file Input_HDFS will now be 100% loaded into the Alluxio file system space.

    Configure Spark to find Alluxio cluster in HA mode

    When connecting to the Alluxio HA cluster using internal leader election, set the alluxio.master.rpc.addresses property via the Java options in ${SPARK_HOME}/conf/spark-defaults.conf so Spark applications know which Alluxio masters to connect to and how to identify the leader. For example:

    spark.driver.extraJavaOptions -Dalluxio.master.rpc.addresses=master_hostname_1:19998,master_hostname_2:19998,master_hostname_3:19998
    spark.executor.extraJavaOptions -Dalluxio.master.rpc.addresses=master_hostname_1:19998,master_hostname_2:19998,master_hostname_3:19998

    Alternatively you can add the property to the Hadoop configuration file ${SPARK_HOME}/conf/core-site.xml:
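    The XML snippet itself is missing above; a sketch of the equivalent core-site.xml entry, reusing the same hypothetical master hostnames from the spark-defaults.conf example:

    ```xml
    <configuration>
      <property>
        <name>alluxio.master.rpc.addresses</name>
        <value>master_hostname_1:19998,master_hostname_2:19998,master_hostname_3:19998</value>
      </property>
    </configuration>
    ```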

    Customize Alluxio User Properties for Individual Spark Jobs

    Spark users can pass JVM system properties to set Alluxio properties on Spark jobs by adding "-Dproperty=value" to spark.executor.extraJavaOptions for Spark executors and spark.driver.extraJavaOptions for Spark drivers. For example, to submit a Spark job that uses the Alluxio CACHE_THROUGH write type:

    $ spark-submit \
    --conf 'spark.driver.extraJavaOptions=-Dalluxio.user.file.writetype.default=CACHE_THROUGH' \
    --conf 'spark.executor.extraJavaOptions=-Dalluxio.user.file.writetype.default=CACHE_THROUGH' \
    ...

    To customize Alluxio client-side properties for a Spark job, see how to configure Spark Jobs.

    Note that in client mode you need to set --driver-java-options "-Dalluxio.user.file.writetype.default=CACHE_THROUGH" instead of --conf spark.driver.extraJavaOptions=-Dalluxio.user.file.writetype.default=CACHE_THROUGH.

    Advanced Usage

    If Spark is set up as described above, you can write URIs using the alluxio:/// scheme without specifying cluster information in the authority. This is because, in HA mode, the address of the leader Alluxio master is resolved by the internal leader election or by the configured ZooKeeper service.

    val s = sc.textFile("alluxio:///Input")
    val double = s.map(line => line + line)
    double.saveAsTextFile("alluxio:///Output")

    Alternatively, one can use the HA authority in the URI directly without any configuration setup. For example, specify the master RPC addresses in the URI to connect to an Alluxio HA cluster using internal leader election:

    val s = sc.textFile("alluxio://master_hostname_1:19998;master_hostname_2:19998;master_hostname_3:19998/Input")
    val double = s.map(line => line + line)
    double.saveAsTextFile("alluxio://master_hostname_1:19998;master_hostname_2:19998;master_hostname_3:19998/Output")

    Cache RDD into Alluxio

    Storing RDDs in Alluxio memory is as simple as saving the RDD as a file to Alluxio. Two common ways to save RDDs as files in Alluxio are:

    1. saveAsTextFile: writes the RDD as a text file, where each element is a line in the file.
    2. saveAsObjectFile: writes the RDD out to a file, by using Java serialization on each element.

    The saved RDDs in Alluxio can be read again (from memory) by using sc.textFile or sc.objectFile respectively.

    // as text file
    rdd.saveAsTextFile("alluxio://localhost:19998/rdd1")
    val rddFromText = sc.textFile("alluxio://localhost:19998/rdd1")
    // as object file
    rdd.saveAsObjectFile("alluxio://localhost:19998/rdd2")
    val rddFromObject = sc.objectFile("alluxio://localhost:19998/rdd2")


    Cache Dataframes in Alluxio

    Storing Spark DataFrames in Alluxio memory is as simple as saving the DataFrame as a file to Alluxio. DataFrames are commonly written as Parquet files, with df.write.parquet(). After the Parquet file is written to Alluxio, it can be read from memory by using sqlContext.read.parquet().

    df.write.parquet("alluxio://localhost:19998/data.parquet")
    val df2 = sqlContext.read.parquet("alluxio://localhost:19998/data.parquet")


    Logging Configuration

    You may configure Spark’s application logging for debugging purposes. The Spark documentation explains how to configure logging for a Spark application.

    If you are using YARN, there is a separate section of the Spark documentation which explains how to configure logging with YARN for a Spark application.

    Check Spark with Alluxio Integration

    To ensure that your Spark installation can correctly communicate with Alluxio, Alluxio ships with a tool to help check the configuration.

    For example,

    $ integration/checker/bin/alluxio-checker.sh spark spark://sparkMaster:7077

    This command will report potential problems that might prevent you from running Spark on Alluxio.

    You can use the -h option to display helpful information about the command.

    Incorrect Data Locality Level of Spark Tasks

    If Spark task locality is ANY when it should be NODE_LOCAL, it is probably because Alluxio and Spark use different network address representations; one may use hostnames while the other uses IP addresses. Refer to JIRA ticket SPARK-10149 for more details, where you can find solutions from the Spark community.

    Note: Alluxio workers use hostnames to represent network addresses, to be consistent with HDFS. There is a workaround when launching Spark to achieve data locality: users can explicitly specify hostnames using the following script offered by Spark. Start the Spark Worker on each slave node with the slave hostname:

    $ ${SPARK_HOME}/sbin/start-slave.sh -h <slave-hostname> <spark master uri>

    For example:

    $ ${SPARK_HOME}/sbin/start-slave.sh -h simple30 spark://simple27:7077

    You can also set SPARK_LOCAL_HOSTNAME in $SPARK_HOME/conf/spark-env.sh to achieve this. For example:

    SPARK_LOCAL_HOSTNAME=simple30

    Either way, the Spark Worker addresses become hostnames and the Locality Level becomes NODE_LOCAL, as shown in the Spark Web UI below.

    [Figure: Spark Web UI showing the NODE_LOCAL locality level]

    Data Locality of Spark Jobs on YARN

    To maximize the amount of locality your Spark jobs attain, you should use as many executors as possible, ideally at least one executor per node. It is recommended to co-locate Alluxio workers with the Spark executors.

    When a Spark job is run on YARN, Spark launches its executors without taking data locality into account. Spark then correctly takes data locality into account when deciding how to distribute tasks to its executors.

    For example, if host1 contains blockA and a job using blockA is launched on the YARN cluster with --num-executors=1, Spark might place the only executor on host2 and have poor locality. However, if --num-executors=2 and executors are started on host1 and host2, Spark will be smart enough to prioritize placing the job on host1.

    Class alluxio.hadoop.FileSystem not found Issues with SparkSQL and Hive Metastore

    To run spark-shell with the Alluxio client, the Alluxio client jar must be added to the classpath of the Spark driver and Spark executors, as described earlier. However, sometimes SparkSQL may fail to save tables to the Hive Metastore (with a location in Alluxio), with an error message similar to the following:

    org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.lang.RuntimeException: java.lang.ClassNotFoundException: Class alluxio.hadoop.FileSystem not found)
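    A commonly recommended workaround, sketched here as an assumption since the fix is not shown above, is to tell Spark's isolated Hive Metastore classloader to share the Alluxio classes with the main classloader. This uses the standard spark.sql.hive.metastore.sharedPrefixes property in ${SPARK_HOME}/conf/spark-defaults.conf:

    ```properties
    # Share classes under the alluxio package with the Hive Metastore classloader
    spark.sql.hive.metastore.sharedPrefixes alluxio
    ```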

    No FileSystem for scheme: alluxio Issue with Spark on YARN

    If you use Spark on YARN with Alluxio and run into the exception java.io.IOException: No FileSystem for scheme: alluxio, please add the following content to ${SPARK_HOME}/conf/core-site.xml:

    <configuration>
      <property>
        <name>fs.alluxio.impl</name>
        <value>alluxio.hadoop.FileSystem</value>
      </property>
    </configuration>