Installation for Spark
Inheriting the settings from an existing Hadoop cluster configuration is the easiest way.
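As a rough sketch of the inheriting case: if Spark already picks up the cluster's Hadoop configuration (for example through HADOOP_CONF_DIR), declaring SeaweedFS in that cluster's core-site.xml should be enough. The filer address localhost:8888 below is a placeholder for your own filer, and fs.defaultFS only needs to change if SeaweedFS is to be the default filesystem.

<property>
  <name>fs.seaweedfs.impl</name>
  <value>seaweed.hdfs.SeaweedFileSystem</value>
</property>
<property>
  <name>fs.defaultFS</name>
  <value>seaweedfs://localhost:8888</value>
</property>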
Installation not inheriting from Hadoop cluster configuration
Copy the seaweedfs-hadoop2-client-x.x.x.jar to all executor machines.
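Copying the jar by itself does not put it on Spark's classpath. One way to register it, assuming it was copied to /opt/seaweedfs/ (a placeholder path), is to add it to conf/spark-defaults.conf on every node:

spark.driver.extraClassPath=/opt/seaweedfs/seaweedfs-hadoop2-client-x.x.x.jar
spark.executor.extraClassPath=/opt/seaweedfs/seaweedfs-hadoop2-client-x.x.x.jar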
Then pass the SeaweedFS settings to spark-submit at runtime, substituting your own application jar:
./bin/spark-submit \
  --master local[4] \
  --conf spark.hadoop.fs.seaweedfs.impl=seaweed.hdfs.SeaweedFileSystem \
  --conf spark.hadoop.fs.defaultFS=seaweedfs://localhost:8888 \
  <application-jar>
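Once the settings are in place, a quick smoke test from spark-shell (where the spark session is predefined) can confirm the wiring; the path and the localhost:8888 filer address are just the example values used above:

// Write a small text file to SeaweedFS and read it back (example path).
val lines = Seq("hello", "seaweedfs")
spark.sparkContext.parallelize(lines)
  .saveAsTextFile("seaweedfs://localhost:8888/test/spark-smoke")
spark.read.textFile("seaweedfs://localhost:8888/test/spark-smoke").show()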