Running Alluxio on Google Cloud Dataproc

    This guide describes how to configure Alluxio to run on Google Cloud Dataproc.

    Google Cloud Dataproc is a managed on-demand service to run Presto, Spark, and Hadoop compute workloads. It manages the deployment of various Hadoop services and allows for hooks into these services for customizations. Aside from the added performance benefits of caching, Alluxio also enables users to run compute workloads against on-premises storage, or even a different cloud provider’s storage such as AWS S3 and Azure Blob Store.

    Prerequisites

    • A project with the Cloud Dataproc API and Compute Engine API enabled.
    • A GCS bucket.
    • Make sure that the gcloud CLI is set up with the necessary GCS interoperable storage access keys.

    A GCS bucket is required only if it will be mounted as the root of the Alluxio namespace. Alternatively, the root UFS can be reconfigured to HDFS or any other supported under store.

    When creating a Dataproc cluster, Alluxio can be installed using an initialization action.

    There are several properties set as metadata labels that control the Alluxio deployment.

    • A required argument is the root UFS address, configured using alluxio_root_ufs_uri. If the value LOCAL is provided, the HDFS cluster launched by the current Dataproc cluster will be used as the Alluxio root UFS.
    • Properties must be specified using the metadata key alluxio_site_properties delimited using a semicolon (;).

    Example 1: use a Google Cloud Storage bucket as Alluxio root UFS
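    A cluster creation command along the following lines mounts a GCS bucket at the Alluxio root. The bucket name and the interoperability access keys are placeholders, and the fs.gcs.accessKeyId and fs.gcs.secretAccessKey property names assume the default GCS under store is used:

    $ gcloud dataproc clusters create <cluster_name> \
    --initialization-actions gs://alluxio-public/dataproc/2.3.0/alluxio-dataproc.sh \
    --metadata \
    alluxio_root_ufs_uri=gs://<my_bucket>,\
    alluxio_site_properties="fs.gcs.accessKeyId=<my_access_key>;fs.gcs.secretAccessKey=<my_secret_key>"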

    Example 2: use Dataproc internal HDFS as Alluxio root UFS

    $ gcloud dataproc clusters create <cluster_name> \
    --initialization-actions gs://alluxio-public/dataproc/2.3.0/alluxio-dataproc.sh \
    --metadata \
    alluxio_root_ufs_uri="LOCAL",\
    alluxio_site_properties="alluxio.master.mount.table.root.option.alluxio.underfs.hdfs.configuration=/etc/hadoop/conf/core-site.xml:/etc/hadoop/conf/hdfs-site.xml"

    Customization

    The Alluxio deployment on Google Cloud Dataproc can be customized for more complex scenarios by passing additional metadata labels to the gcloud dataproc clusters create command.

    Enable Active Sync on HDFS Paths

    HDFS paths for the Alluxio master to keep synchronized can be specified using the metadata key alluxio_sync_list, with multiple paths delimited by a semicolon (;).

    ...
    --metadata \
    alluxio_sync_list="/tmp;/user/hadoop",\
    ...

    Download Additional Files

    Additional files can be downloaded into the Alluxio installation directory at /opt/alluxio/conf using the metadata key alluxio_download_files_list. Specify http(s) or gs URIs delimited using a semicolon (;).

    ...
    --metadata \
    alluxio_download_files_list="gs://<my_bucket>/<my_file>;https://<server>/<file>",\
    ...

    Tiered Storage

    The default Alluxio Worker memory is set to 1/3 of the physical memory on the instance. If a specific value is desired, set alluxio.worker.memory.size in the provided alluxio-site.properties.
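    For example, an explicit worker memory size can be passed through the alluxio_site_properties metadata key described above; the 16GB value below is only illustrative:

    ...
    --metadata \
    alluxio_site_properties="alluxio.worker.memory.size=16GB",\
    ...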

    Alternatively, when volumes such as Local SSDs are mounted, specify the metadata label alluxio_ssd_capacity_usage to configure the percentage of all available SSDs on the virtual machine provisioned as Alluxio worker storage. Memory is not configured as the primary Alluxio storage tier in this case.

    Pass additional arguments to the gcloud dataproc clusters create command.

    ...
    --num-worker-local-ssds=1 \
    --metadata \
    alluxio_ssd_capacity_usage="60",\
    ...

    The status of the cluster deployment can be monitored using the CLI.
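    For example, the describe subcommand reports the cluster state; the --region flag may be needed if the gcloud CLI has no default region configured:

    $ gcloud dataproc clusters describe <cluster_name>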

    Identify the instance name and SSH into this instance to test the deployment.

    $ gcloud compute ssh <cluster_name>-m

    Test that Alluxio is running as expected.

    $ sudo runuser -l alluxio -c "alluxio runTests"

    Alluxio is installed and configured in /opt/alluxio/. Alluxio services are started as the alluxio user.

    Spark, Hive and Presto on Dataproc are pre-configured to connect to Alluxio.

    Spark on Alluxio

    Open a Spark shell.

    $ spark-shell

    Run a sample job.

    scala> sc.textFile("alluxio:///default_tests_files/BASIC_NO_CACHE_MUST_CACHE").count

    For further information, visit our Spark on Alluxio documentation.

    Hive on Alluxio

    Download a sample dataset.
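    The steps below use the MovieLens ml-100k dataset, which provides the ~/ml-100k/u.user file referenced next; the download URL is an assumption about where GroupLens hosts the archive:

    $ wget http://files.grouplens.org/datasets/movielens/ml-100k.zip
    $ unzip ml-100k.zip -d ~/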

    Copy the data to Alluxio.

    $ alluxio fs mkdir /ml-100k
    $ alluxio fs copyFromLocal ~/ml-100k/u.user /ml-100k/

    Open the Hive CLI.

    $ hive

    Create a table.

    hive> CREATE EXTERNAL TABLE u_user (
    userid INT,
    age INT,
    gender CHAR(1),
    occupation STRING,
    zipcode STRING)
    ROW FORMAT DELIMITED
    FIELDS TERMINATED BY '|'
    LOCATION 'alluxio:///ml-100k';

    Run a query.

    hive> select * from u_user limit 10;

    For further information, visit our Hive on Alluxio documentation.

    Presto on Alluxio

    Note: There are two ways to install Presto on Dataproc.

    • If Presto is installed via the Dataproc optional component, the Alluxio initialization action is already configured for its install location and no additional setup is required.
    • If using an initialization action to install an alternate distribution of Presto, override the default home directory, as it differs from the install home of the optional component. Set the metadata label alluxio_presto_home=/opt/presto-server with the cluster creation command to ensure Presto is configured to use Alluxio, as sketched below.
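    A minimal sketch of such a cluster creation command; <presto_init_action_uri> stands in for whichever Presto initialization action is used, and the root UFS URI is filled in as in the earlier examples:

    $ gcloud dataproc clusters create <cluster_name> \
    --initialization-actions <presto_init_action_uri>,gs://alluxio-public/dataproc/2.3.0/alluxio-dataproc.sh \
    --metadata \
    alluxio_presto_home="/opt/presto-server",\
    alluxio_root_ufs_uri=gs://<my_bucket>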

    To test Presto on Alluxio, run a query on the table created in the Hive section above.
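    For example, from the Presto CLI on the master node, assuming the table lives in the default schema of the hive catalog:

    $ presto --catalog hive --schema default --execute "select * from u_user limit 10;"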