SeaweedFS excels at small files and has no problem storing large files. It is now possible for Hadoop jobs to read from and write to SeaweedFS.

Build SeaweedFS Hadoop Client Jar
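
A typical build from the SeaweedFS source tree looks like the following (a sketch assuming the other/java Maven modules in the repository; module paths and the version in the jar name may differ between releases):

  # build and install the shared client module first
  $ cd seaweedfs/other/java/client
  $ mvn install
  # then build the hadoop2 client...
  $ cd ../hdfs2
  $ mvn package
  $ ls target/seaweedfs-hadoop2-client-*.jar
  # ...or the hadoop3 client
  $ cd ../hdfs3
  $ mvn package
  $ ls target/seaweedfs-hadoop3-client-*.jar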

Or you can download the latest version from Maven Central.

Test SeaweedFS on Hadoop

Suppose you have a fresh Hadoop installation. Here are the minimum steps to get SeaweedFS running.

You would need to start a weed filer first and build the seaweedfs-hadoop2-client-x.x.x.jar or seaweedfs-hadoop3-client-x.x.x.jar.
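
If you do not already have a filer running, one quick way to bring everything up locally is shown below (a minimal sketch assuming a default setup: weed server starts an embedded master, volume server, and filer, and the filer listens on port 8888 by default):

  $ weed server -filer

With the filer up and the client jar built, run: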

  # create etc/hadoop/mapred-site.xml, just to satisfy hdfs dfs. skip this if the file already exists.
  $ echo "<configuration></configuration>" > etc/hadoop/mapred-site.xml

  # on hadoop2
  $ bin/hdfs dfs -Dfs.defaultFS=seaweedfs://localhost:8888 \
      -Dfs.seaweedfs.impl=seaweed.hdfs.SeaweedFileSystem \
      -libjars ./seaweedfs-hadoop2-client-x.x.x.jar \
      -ls /

  # or on hadoop3
  $ bin/hdfs dfs -Dfs.defaultFS=seaweedfs://localhost:8888 \
      -Dfs.seaweedfs.impl=seaweed.hdfs.SeaweedFileSystem \
      -libjars ./seaweedfs-hadoop3-client-x.x.x.jar \
      -ls /

Both reads and writes are working fine.

Installation for Hadoop

  • Configure Hadoop to use SeaweedFS in etc/hadoop/conf/core-site.xml. core-site.xml resides on each node in the Hadoop cluster, and you must add the same properties to every instance of core-site.xml. There are two properties to modify (see the example core-site.xml after this list):
    • fs.seaweedfs.impl: This property defines the SeaweedFS HCFS implementation class that is contained in the SeaweedFS HDFS client JAR. It is required.
    • fs.defaultFS: This property defines the default file system URI to use. It is optional if every path is given with the prefix seaweedfs://localhost:8888.
  • Deploy the SeaweedFS HDFS client jar
  # Run the classpath command to get the list of directories in the classpath
  $ bin/hadoop classpath
  # Copy the SeaweedFS HDFS client jar to one of those folders
  $ cd ${HADOOP_HOME}
  # for hadoop2
  $ cp ./seaweedfs-hadoop2-client-x.x.x.jar share/hadoop/common/lib/
  # or for hadoop3
  $ cp ./seaweedfs-hadoop3-client-x.x.x.jar share/hadoop/common/lib/
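
For the first step, here is a minimal example core-site.xml for a filer at localhost:8888 (adjust the host and port to match your filer; the fs.defaultFS property can be omitted if you always spell out the seaweedfs:// prefix):

  <configuration>
    <property>
      <name>fs.seaweedfs.impl</name>
      <value>seaweed.hdfs.SeaweedFileSystem</value>
    </property>
    <property>
      <name>fs.defaultFS</name>
      <value>seaweedfs://localhost:8888</value>
    </property>
  </configuration>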

Supported HDFS Operations

  bin/hdfs dfs -appendToFile README.txt /weedfs/weedfs.txt
  bin/hdfs dfs -cat /weedfs/weedfs.txt
  bin/hdfs dfs -rm -r /uber
  bin/hdfs dfs -chown -R chris:chris /weedfs
  bin/hdfs dfs -chmod -R 755 /weedfs
  bin/hdfs dfs -copyFromLocal README.txt /weedfs/README.txt.2
  bin/hdfs dfs -copyToLocal /weedfs/README.txt.2 .
  bin/hdfs dfs -count /weedfs/README.txt.2
  bin/hdfs dfs -du -h /weedfs
  bin/hdfs dfs -get /weedfs/weedfs.txt
  bin/hdfs dfs -getfacl /weedfs
  bin/hdfs dfs -getmerge -nl /weedfs w.txt
  bin/hdfs dfs -ls /
  bin/hdfs dfs -mkdir /tmp
  bin/hdfs dfs -mkdir -p /tmp/x/y
  bin/hdfs dfs -moveFromLocal README.txt.2 /tmp/x/
  bin/hdfs dfs -mv /tmp/x/y/README.txt.2 /tmp/x/y/README.txt.3
  bin/hdfs dfs -mv /tmp/x /tmp/z
  bin/hdfs dfs -put README.txt /tmp/z/y/
  bin/hdfs dfs -rm /tmp/z/y/*
  bin/hdfs dfs -rmdir /tmp/z/y
  bin/hdfs dfs -stat /weedfs
  bin/hdfs dfs -tail /weedfs/weedfs.txt
  bin/hdfs dfs -test -f /weedfs/weedfs.txt
  bin/hdfs dfs -text /weedfs/weedfs.txt

Notes

SeaweedFS satisfies the HCFS requirement that the following operations be atomic when the filer store uses MySQL/Postgres database transactions.

  • Creating a file. If the overwrite parameter is false, the check and creation MUST be atomic.
  • Deleting a file.
  • Renaming a file.
  • Renaming a directory.
  • Creating a single directory with mkdir().

Among these, all operations except renaming a file or a directory are atomic for any filer store:

  • Creating a file
  • Deleting a file
  • Creating a single directory with mkdir()

The SeaweedFS Hadoop client is a pure Java library. There are no native libraries to install if you already have Hadoop running.

One headache with complicated Java systems is resolving jar dependencies at runtime (a problem Go sidesteps with its build-time dependency resolution). For this SeaweedFS Hadoop client, the required jars are mostly shaded and packaged as one fat jar, so no extra jar files are needed.
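
If you want to confirm what is bundled, you can list the jar contents (a quick sanity check; replace the version with the one you built or downloaded):

  $ jar tf seaweedfs-hadoop2-client-x.x.x.jar | head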

  • If things do not work, check whether you have enabled gRPC security on the SeaweedFS cluster; if so, the Hadoop client needs the matching security configuration.