
Commit cf4a305

committed
added job images etc.
1 parent 9e24796 commit cf4a305

5 files changed: +20 −2 lines changed

docs/modules/getting_started/nav.adoc

+1 −1

@@ -1,3 +1,3 @@
-** xref:index.adoc[]
+* xref:index.adoc[]
 ** xref:installation.adoc[]
 ** xref:first_steps.adoc[]

docs/modules/getting_started/pages/first_steps.adoc

+19 −1

@@ -29,7 +29,7 @@ Where:
 - `spec.version`: the current version is "1.0"
 - `spec.sparkImage`: the docker image that will be used by job, driver and executor pods. This can be provided by the user.
 - `spec.mode`: only `cluster` is currently supported
-- `spec.mainApplicationFile`: the artifact (Java, Scala or Python) that forms the basis of the Spark job.
+- `spec.mainApplicationFile`: the artifact (Java, Scala or Python) that forms the basis of the Spark job. This path is relative to the image, so in this case we are running an example Python script (that calculates the value of pi): it is bundled with the Spark code and therefore already present in the job image.
 - `spec.driver`: driver-specific settings.
 - `spec.executor`: executor-specific settings.
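The fields above belong to the SparkApplication manifest that the guide walks through and that is applied with `kubectl`. The sketch below is only an illustration under stated assumptions: the `apiVersion`, image tag and script path are guesses rather than values taken from this commit, and only the fields discussed above are shown.

[source,bash]
----
# Hypothetical sketch of creating a SparkApplication with the fields described above.
# apiVersion, image tag and script path are assumptions, not values from this commit.
kubectl apply -f - <<EOF
apiVersion: spark.stackable.tech/v1alpha1
kind: SparkApplication
metadata:
  name: pyspark-pi
spec:
  version: "1.0"
  sparkImage: docker.stackable.tech/stackable/spark-k8s:latest  # choose a tag from the repository mentioned below
  mode: cluster
  mainApplicationFile: local:///stackable/spark/examples/src/main/python/pi.py  # assumed path to the bundled pi example
  driver: {}          # driver-specific settings would go here
  executor:
    instances: 3      # the guide's example runs three executors
EOF
----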

@@ -39,3 +39,21 @@ https://repo.stackable.tech/#browse/browse:docker:v2%2Fstackable%spark-k8s%2Ftag
 It should generally be safe to simply use the latest image version that is available.
 
 This will create the SparkApplication that in turn creates the Spark job.
+
+== Verify that it works
+
+As mentioned above, the SparkApplication that has just been created will build a spark-submit command and pass it to the driver pod, which in turn will create executor pods that run for the duration of the job before being cleaned up. A running process will look like this:
+
+image::spark_running.png[Spark job]
+
+- `pyspark-pi-xxxx`: the initialising job that creates the spark-submit command (named after `metadata.name` with a unique suffix)
+- `pyspark-pi-xxxxxxx-driver`: the driver pod that drives the execution
+- `pythonpi-xxxxxxxxx-exec-x`: the set of executors started by the driver (in our example `spec.executor.instances` was set to 3, which is why we have 3 executors)
+
+When the job completes, the driver cleans up the executor pods. The initial job is persisted for several minutes before being removed. The completed state will look like this:
+
+image::spark_complete.png[Completed job]
+
+The driver logs can be inspected for more information about the results of the job. In this case we expect to find the results of our (approximate!) pi calculation:
+
+image::spark_log.png[Driver log]
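The new section names the job, driver and executor pods but does not give a command for listing them. As a generic sketch using standard `kubectl` (nothing here is taken from the commit), the pods can be watched while the job runs:

[source,bash]
----
# Watch the pods created for the Spark job; names follow the patterns described above
# (pyspark-pi-xxxx job, pyspark-pi-xxxx-driver, pythonpi-xxxx-exec-N).
kubectl get pods -w
----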
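The driver log containing the pi approximation can be read with `kubectl` as well; the pod name below is a placeholder for whatever driver pod name `kubectl get pods` actually reports:

[source,bash]
----
# Inspect the driver log for the (approximate) value of pi computed by the job.
kubectl logs pyspark-pi-xxxxxxx-driver
----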
