Executing Presto on Spark
Spark provides several value adds, such as resource isolation, fine-grained resource management, and Spark's scalable materialized exchange mechanism.
The following is an example:
To execute Presto on Spark, first start your Spark cluster, which we will assume has the URL spark://spark-master:7077. Save your time-consuming query in a file called, say, query.sql. Then run the spark-submit command from the example directory created earlier:
/spark/bin/spark-submit \
--master spark://spark-master:7077 \
--executor-cores 4 \
--conf spark.task.cpus=4 \
--class com.facebook.presto.spark.launcher.PrestoSparkLauncher \
presto-spark-launcher-0.272.1.jar \
--package presto-spark-package-0.272.1.tar.gz \
--config /presto/etc/config.properties \
--catalogs /presto/etc/catalogs \
--catalog hive \
--schema default \
--file query.sql
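Adjust the --master URL, the catalog and schema names, and the launcher and package paths to match your environment.

For illustration, query.sql can hold any Presto SQL statement. The following is a minimal sketch of such a query, assuming a hypothetical orders table in the hive catalog (the table and column names are not part of the setup above):

-- Hypothetical long-running aggregation; replace with your own query.
SELECT orderpriority, COUNT(*) AS order_count
FROM orders
GROUP BY orderpriority
ORDER BY order_count DESC;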