Ofs (Hadoop compatible)

    Currently, Ozone supports two schemes: o3fs:// and ofs://. The biggest difference between o3fs and ofs is that o3fs supports operations only on a single bucket, while ofs supports operations across all volumes and buckets and provides a full view of all the volumes and buckets.

    Examples of valid OFS paths (illustrative, assuming an OM host om1 and an HA service ID omservice):
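
        ofs://om1/
        ofs://om1/volume1/
        ofs://om1/volume1/bucket1/
        ofs://omservice/volume1/bucket1/dir1/key1
        ofs://omservice/tmp/
        ofs://omservice/tmp/key1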

    Volumes and mounts are located at the root level of an OFS file system. Buckets are listed naturally under volumes. Keys and directories are under each bucket.

    Note that for mounts, only the temp mount /tmp is supported at the moment.

    Configuration

    Please add the following entries to core-site.xml:

        <property>
          <name>fs.ofs.impl</name>
          <value>org.apache.hadoop.fs.ozone.RootedOzoneFileSystem</value>
        </property>
        <property>
          <name>fs.defaultFS</name>
          <!-- Example value; use your own OM host or HA service ID -->
          <value>ofs://om-host.example.com/</value>
        </property>

    This registers the ofs file system type and makes OFS, with its view of all volumes and buckets, the default Hadoop compatible file system.

    You also need to add the hadoop-ozone-filesystem-hadoop3 jar to the classpath:

        export HADOOP_CLASSPATH=/opt/ozone/share/ozonefs/lib/hadoop-ozone-filesystem-hadoop3-*.jar:$HADOOP_CLASSPATH

    (Note: with Hadoop 2.x, use the hadoop-ozone-filesystem-hadoop2-*.jar)
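
    For example, with Hadoop 2.x the export would look like this (the library path below mirrors the Hadoop 3 example above and may differ in your installation):

        export HADOOP_CLASSPATH=/opt/ozone/share/ozonefs/lib/hadoop-ozone-filesystem-hadoop2-*.jar:$HADOOP_CLASSPATH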

    Once the default file system has been set up, users can run commands like ls, put, mkdir, etc. For example:

        hdfs dfs -ls /
        hdfs dfs -mkdir /volume1
        hdfs dfs -mkdir /volume1/bucket1

    Or use the put command to write a file to the bucket.
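
    For instance (the local file name is illustrative):

        hdfs dfs -put local-file.txt /volume1/bucket1/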

    For more usage, see:

    Differences from o3fs

    OFS doesn’t allow creating keys (files) directly under root or volumes. Users will receive an error message when they try to do that:

        $ ozone fs -touch /volume1/key1
        touch: Cannot create file under root or volume.

    With OFS, fs.defaultFS (in core-site.xml) no longer needs to have a specific volume and bucket in its path like o3fs did. Simply put the OM host or service ID (in case of HA):

        <property>
          <name>fs.defaultFS</name>
          <value>ofs://omservice</value>
        </property>

    The client would then be able to access every volume and bucket on the cluster without specifying the hostname or service ID.

        $ ozone fs -mkdir -p /volume1/bucket1

    Admins can create and delete volumes and buckets easily with the Hadoop FS shell. Volumes and buckets are treated similarly to directories, so with -p they will be created if they don’t exist:

        $ ozone fs -mkdir -p ofs://omservice/volume1/bucket1/dir1/
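
    Deleting works the same way, e.g. (illustrative; -R removes the bucket and its keys recursively):

        $ ozone fs -rm -R ofs://omservice/volume1/bucket1/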

    Note that the supported volume and bucket name character set rules still apply. For instance, bucket and volume names don’t accept underscores (_).
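
    For example, a command like this would be rejected (illustrative; the exact error text may vary by version):

        $ ozone fs -mkdir -p /volume_1
        mkdir: Bucket or Volume name has an unsupported character : _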

    Mounts

    In order to be compatible with legacy Hadoop applications that use /tmp/, we have a special temp mount located at the root of the FS. This feature may be expanded in the future to support custom mount paths.

    To use it, an admin first needs to create the volume tmp and set its ACL to world-all access:

        $ ozone sh volume create tmp
        $ ozone sh volume setacl tmp -al world::a

    These commands only need to be done once per cluster.

    Then, each user needs to mkdir first to initialize their own temp bucket once.

        $ ozone fs -mkdir /tmp
        2020-06-04 00:00:00,050 [main] INFO rpc.RpcClient: Creating Bucket: tmp/0238 ...

    After that they can write to it just as they would to a regular directory, e.g.:

        $ ozone fs -touch /tmp/key1
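
    And list it back like any other directory (illustrative):

        $ ozone fs -ls /tmp/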

    Delete with trash enabled

    When keys are deleted with trash enabled, they are moved to a trash directory under each bucket, because keys aren’t allowed to be moved (renamed) between buckets in Ozone.

        $ ozone fs -rm /volume1/bucket1/key1
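
    The key then lands in a per-bucket trash location similar to the following (illustrative path, assuming the user hadoop):

        /volume1/bucket1/.Trash/hadoop/Current/volume1/bucket1/key1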

    This is very similar to how HDFS encryption zones handle trash locations.

    Recursive listing

    OFS supports recursive volume, bucket and key listing.

    i.e. ozone fs -ls -R ofs://omservice/ will recursively list all volumes, buckets and keys the user has LIST permission to, if ACL is enabled. If ACL is disabled, the command will simply list everything on the cluster.

    This feature doesn’t degrade server performance, as the looping happens on the client. Think of it as a client issuing multiple requests to the server to gather all the information.

    Special note

    Trash is intentionally disabled even if fs.trash.interval is set. (HDDS-3982)