Executing Presto on Spark

    Spark provides several value adds, such as resource isolation, fine-grained resource management, and Spark's scalable materialized exchange mechanism.

    The following is an example:

    To execute Presto on Spark, first start your Spark cluster, which we will assume has the URL spark://spark-master:7077. Keep your time-consuming query in a file called, say, query.sql. Run the spark-submit command from the example directory created earlier:

    /spark/bin/spark-submit \
      --master spark://spark-master:7077 \
      --executor-cores 4 \
      --conf spark.task.cpus=4 \
      --class com.facebook.presto.spark.launcher.PrestoSparkLauncher \
      presto-spark-launcher-0.272.1.jar \
      --package presto-spark-package-0.272.1.tar.gz \
      --config /presto/etc/config.properties \
      --catalogs /presto/etc/catalogs \
      --catalog hive \
      --schema default \
      --file query.sql
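
    As an illustration, query.sql simply contains the SQL statement to run against the hive catalog's default schema. The orders table used below is a hypothetical example; substitute any table available in your catalog.

    -- Hypothetical contents of query.sql; any valid Presto SQL statement works here.
    -- The orders table is an assumed example in the hive catalog's default schema.
    SELECT custkey, COUNT(*) AS order_count
    FROM orders
    GROUP BY custkey
    ORDER BY order_count DESC
    LIMIT 10;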