Installation for Spark

    Inheriting from an existing Hadoop cluster configuration should be the easiest way, since Spark then picks up the SeaweedFS settings already defined for the cluster.
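    Concretely, Spark reads the cluster's Hadoop configuration through HADOOP_CONF_DIR, so if core-site.xml there already declares the SeaweedFS file system, no Spark-side changes are needed. A quick check along these lines (the grep invocation is only an illustration, and it assumes HADOOP_CONF_DIR is set) shows whether the relevant properties are present:

    # assumes HADOOP_CONF_DIR points at the cluster's Hadoop configuration directory
    grep -E -A 1 'fs\.seaweedfs\.impl|fs\.defaultFS' "$HADOOP_CONF_DIR/core-site.xml"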

    Installation without inheriting from the Hadoop cluster configuration

    Copy the seaweedfs-hadoop2-client-x.x.x.jar to all executor machines.
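    For example, a loop like the one below could distribute the jar. The hosts.txt file listing the executor hostnames and the /opt/seaweedfs/ target directory are assumptions for illustration, not part of the official setup:

    # copy the SeaweedFS HDFS client jar to every executor machine
    for host in $(cat hosts.txt); do
      scp seaweedfs-hadoop2-client-x.x.x.jar "$host":/opt/seaweedfs/
    done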

    And specify the configuration at runtime:

    ./bin/spark-submit \
      --master local[4] \
      --conf spark.hadoop.fs.seaweedfs.impl=seaweed.hdfs.SeaweedFileSystem \
      --conf spark.hadoop.fs.defaultFS=seaweedfs://localhost:8888 \
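    (The spark-submit command above would continue with the application jar and its arguments.)

    As a quick sanity check, not part of the original instructions, a spark-shell session along the following lines can confirm that Spark reads and writes SeaweedFS. It assumes a SeaweedFS filer on localhost:8888 and that the client jar was copied to /opt/seaweedfs/ on every machine; the jar path and the /test/spark-out output path are assumptions:

    ./bin/spark-shell \
      --conf spark.driver.extraClassPath=/opt/seaweedfs/seaweedfs-hadoop2-client-x.x.x.jar \
      --conf spark.executor.extraClassPath=/opt/seaweedfs/seaweedfs-hadoop2-client-x.x.x.jar \
      --conf spark.hadoop.fs.seaweedfs.impl=seaweed.hdfs.SeaweedFileSystem \
      --conf spark.hadoop.fs.defaultFS=seaweedfs://localhost:8888

    # inside the shell:
    scala> sc.parallelize(1 to 10).saveAsTextFile("seaweedfs://localhost:8888/test/spark-out")
    scala> sc.textFile("seaweedfs://localhost:8888/test/spark-out").count()

    spark.driver.extraClassPath and spark.executor.extraClassPath expect the jar to already exist at that path on each machine, which matches the copy step above; alternatively, the --jars option of spark-submit/spark-shell can ship the jar from the submitting host.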