How to Run Your First Spark Job

In this guide, we explain how to run your first Spark job on IOMETE.

In the console, navigate to the Job menu and click Create New.

For the sake of simplicity, let's use the public IOMETE Spark images. In this example, we are going to use iomete/spark:3.2.1.0.
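If you'd like to inspect the image locally before creating the job, you can pull it with Docker. This step is optional; the listing below is just a sketch to confirm the example jars bundled in the image (we override the entrypoint, since the image's default entrypoint isn't documented here):

```sh
# Pull the public image referenced above.
docker pull iomete/spark:3.2.1.0

# List the bundled example jars; --entrypoint ls bypasses whatever
# entrypoint the image defines so the command runs directly.
docker run --rm --entrypoint ls iomete/spark:3.2.1.0 /opt/spark/examples/jars
```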

This Docker image ships with the pre-built jar local:///opt/spark/examples/jars/spark-examples_2.12-3.2.1-iomete.jar, which contains example Spark applications.
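To give a feel for what those examples look like, here is a minimal Scala sketch in the spirit of the well-known SparkPi example that ships with Spark (the object name SparkPiSketch and the sample count are ours for illustration; the class actually bundled in the jar is org.apache.spark.examples.SparkPi):

```scala
import org.apache.spark.sql.SparkSession
import scala.math.random

// A Monte Carlo estimate of Pi, similar in spirit to the SparkPi
// example shipped in the spark-examples jar.
object SparkPiSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("Spark Pi Sketch")
      .getOrCreate()

    val n = 100000
    // Sample n random points in the unit square and count how many
    // fall inside the unit circle.
    val inside = spark.sparkContext
      .parallelize(1 to n)
      .map { _ =>
        val x = random * 2 - 1
        val y = random * 2 - 1
        if (x * x + y * y <= 1) 1 else 0
      }
      .reduce(_ + _)

    // The circle-to-square area ratio is Pi / 4.
    println(s"Pi is roughly ${4.0 * inside / n}")
    spark.stop()
  }
}
```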

In the following screenshot, you'll see how to configure the main application file and main class. You also specify how many compute units you need for the job. 1 ICU is equal to one node with 4 CPU / 32 GB RAM / 150 GB NVMe SSD.
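If you are used to spark-submit, these two form fields map onto the jar path and the --class flag. A rough command-line equivalent, using the SparkPi class bundled in the example jar, would look like this:

```sh
# Roughly what the job configuration above expresses:
# main class -> --class, main application file -> the jar path.
spark-submit \
  --class org.apache.spark.examples.SparkPi \
  local:///opt/spark/examples/jars/spark-examples_2.12-3.2.1-iomete.jar
```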

Hit the Create button and that's all: your job is ready. The job will run based on the schedule you defined, but you can also trigger a run manually.

(Screenshot: Job View)

You can check information about historical and current runs. On the run detail page, you can see when and for how long the job ran. You can also get the logs of the run:

(Screenshot: Job Run Logs)

📘

To submit your own custom job, build your own Docker image based on the general public Spark image provided by IOMETE. Submitting the custom job then follows the same process: you just need to specify your own Docker image, application file path, and main class name.
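As an illustration, such a custom image could be built with a Dockerfile like the following. This is a minimal sketch, not a prescribed layout: my-spark-app.jar and the paths are placeholders for your own build output.

```dockerfile
# Start from the public IOMETE Spark image mentioned above.
FROM iomete/spark:3.2.1.0

# Copy your application jar into the image. The jar name and the
# destination path are placeholders; adjust them to your project.
COPY target/my-spark-app.jar /opt/spark/jars/my-spark-app.jar
```

When creating the job, you would then point the main application file at local:///opt/spark/jars/my-spark-app.jar and set the main class to your application's entry point.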

Congratulations 🎉🎉🎉

