
Commit 736d256

Committed on May 24, 2025
[SPARK-52292] Use super-linter for markdown files
### What changes were proposed in this pull request?

This PR aims to apply `super-linter` for markdown files.

### Why are the changes needed?

For consistency.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Pass the CIs with the newly added `super-linter` test.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #224 from dongjoon-hyun/SPARK-52292.

Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
1 parent eb8804d commit 736d256

File tree: 8 files changed (+126, -87 lines)

.github/workflows/build_and_test.yml

Lines changed: 8 additions & 0 deletions
@@ -147,6 +147,14 @@ jobs:
     steps:
       - name: Checkout repository
         uses: actions/checkout@v4
+        with:
+          fetch-depth: 0
+      - name: Super-Linter
+        uses: super-linter/super-linter@12150456a73e248bdc94d0794898f94e23127c88
+        env:
+          DEFAULT_BRANCH: main
+          VALIDATE_MARKDOWN: true
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
       - name: Set up JDK 17
         uses: actions/setup-java@v4
         with:
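The `fetch-depth: 0` added to the checkout step presumably gives Super-Linter the full git history it uses to work out which files changed. To try the same markdown check locally before pushing, Super-Linter's documented local-run mode can be used; the sketch below is illustrative only (the image reference, mount path, and flags are assumptions, not part of this commit):

```bash
# Sketch only: run Super-Linter's markdown validation against the working tree.
# RUN_LOCAL=true lints the mounted workspace instead of a GitHub Actions checkout;
# VALIDATE_MARKDOWN mirrors the env setting in the workflow step above.
docker run --rm \
  -e RUN_LOCAL=true \
  -e VALIDATE_MARKDOWN=true \
  -v "$(pwd)":/tmp/lint \
  ghcr.io/super-linter/super-linter:latest
```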

.markdownlint.yaml

Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+MD013: false
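For context, `MD013` is markdownlint's line-length rule, so disabling it keeps the long lines in the existing docs from failing the new check while all other default rules still apply. A quick local check against this configuration might look roughly like the following (a sketch only; the use of markdownlint-cli via npx and the exact flags are assumptions, not something this commit adds):

```bash
# Hypothetical local run; --config and --ignore-path point at the files added in this commit.
npx markdownlint-cli "**/*.md" \
  --config .markdownlint.yaml \
  --ignore-path .markdownlintignore
```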

.markdownlintignore

Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+docs/config_properties.md

README.md

Lines changed: 19 additions & 18 deletions
@@ -12,13 +12,14 @@ aims to extend K8s resource manager to manage Apache Spark applications via
 ## Install Helm Chart
 
 Apache Spark provides a Helm Chart.
+
 - <https://apache.github.io/spark-kubernetes-operator/>
 - <https://artifacthub.io/packages/helm/spark-kubernetes-operator/spark-kubernetes-operator/>
 
-```
-$ helm repo add spark-kubernetes-operator https://apache.github.io/spark-kubernetes-operator
-$ helm repo update
-$ helm install spark-kubernetes-operator spark-kubernetes-operator/spark-kubernetes-operator
+```bash
+helm repo add spark-kubernetes-operator https://apache.github.io/spark-kubernetes-operator
+helm repo update
+helm install spark-kubernetes-operator spark-kubernetes-operator/spark-kubernetes-operator
 ```
 
 ## Building Spark K8s Operator

@@ -27,25 +28,25 @@ Spark K8s Operator is built using Gradle.
 To build, run:
 
 ```bash
-$ ./gradlew build -x test
+./gradlew build -x test
 ```
 
 ## Running Tests
 
 ```bash
-$ ./gradlew build
+./gradlew build
 ```
 
 ## Build Docker Image
 
 ```bash
-$ ./gradlew buildDockerImage
+./gradlew buildDockerImage
 ```
 
-## Install Helm Chart
+## Install Helm Chart from the source code
 
 ```bash
-$ helm install spark -f build-tools/helm/spark-kubernetes-operator/values.yaml build-tools/helm/spark-kubernetes-operator/
+helm install spark -f build-tools/helm/spark-kubernetes-operator/values.yaml build-tools/helm/spark-kubernetes-operator/
 ```
 
 ## Run Spark Pi App

@@ -97,14 +98,14 @@ sparkcluster.spark.apache.org "prod" deleted
 
 ## Run Spark Pi App on Apache YuniKorn scheduler
 
-If you have not yet done so, follow [YuniKorn docs](https://yunikorn.apache.org/docs/#install) to install the latest version:
+If you have not yet done so, follow [YuniKorn docs](https://yunikorn.apache.org/docs/#install) to install the latest version:
 
 ```bash
-$ helm repo add yunikorn https://apache.github.io/yunikorn-release
+helm repo add yunikorn https://apache.github.io/yunikorn-release
 
-$ helm repo update
+helm repo update
 
-$ helm install yunikorn yunikorn/yunikorn --namespace yunikorn --version 1.6.3 --create-namespace --set embedAdmissionController=false
+helm install yunikorn yunikorn/yunikorn --namespace yunikorn --version 1.6.3 --create-namespace --set embedAdmissionController=false
 ```
 
 Submit a Spark app to YuniKorn enabled cluster:

@@ -134,7 +135,7 @@ sparkapplication.spark.apache.org "pi-on-yunikorn" deleted
 
 Check the existing Spark applications and clusters. If exists, delete them.
 
-```
+```bash
 $ kubectl get sparkapp
 No resources found in default namespace.
 

@@ -144,12 +145,12 @@ No resources found in default namespace.
 
 Remove HelmChart and CRDs.
 
-```
-$ helm uninstall spark-kubernetes-operator
+```bash
+helm uninstall spark-kubernetes-operator
 
-$ kubectl delete crd sparkapplications.spark.apache.org
+kubectl delete crd sparkapplications.spark.apache.org
 
-$ kubectl delete crd sparkclusters.spark.apache.org
+kubectl delete crd sparkclusters.spark.apache.org
 ```
 
 ## Contributing

docs/architecture.md

Lines changed: 19 additions & 19 deletions
@@ -23,10 +23,10 @@ under the License.
 deployment lifecycle of Spark applications and clusters. The Operator can be installed on Kubernetes
 cluster(s) using Helm. In most production environments it is typically deployed in a designated
 namespace and controls Spark workload in one or more managed namespaces.
-Spark Operator enables user to describe Spark application(s) or cluster(s) as
-[Custom Resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
+Spark Operator enables user to describe Spark application(s) or cluster(s) as
+[Custom Resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
 
-The Operator continuously tracks events related to the Spark custom resources in its reconciliation
+The Operator continuously tracks events related to the Spark custom resources in its reconciliation
 loops:
 
 For SparkApplications:

@@ -43,39 +43,39 @@ For SparkClusters:
 * Operator releases all Spark-cluster owned resources to cluster upon failure
 
 The Operator is built with the [Java Operator SDK](https://javaoperatorsdk.io/) for
-launching Spark deployments and submitting jobs under the hood. It also uses
+launching Spark deployments and submitting jobs under the hood. It also uses
 [fabric8](https://fabric8.io/) client to interact with Kubernetes API Server.
 
 ## Application State Transition
 
-[<img src="resources/application_state_machine.png">](resources/application_state_machine.png)
+[![Application State Transition](resources/application_state_machine.png)](resources/application_state_machine.png)
 
 * Spark applications are expected to run from submitted to succeeded before releasing resources
 * User may configure the app CR to time-out after given threshold of time if it cannot reach healthy
-state after given threshold. The timeout can be configured for different lifecycle stages,
+state after given threshold. The timeout can be configured for different lifecycle stages,
 when driver starting and when requesting executor pods. To update the default threshold,
-configure `.spec.applicationTolerations.applicationTimeoutConfig` for the application.
-* K8s resources created for an application would be deleted as the final stage of the application
+configure `.spec.applicationTolerations.applicationTimeoutConfig` for the application.
+* K8s resources created for an application would be deleted as the final stage of the application
 lifecycle by default. This is to ensure resource quota release for completed applications.
-* It is also possible to retain the created k8s resources for debug or audit purpose. To do so,
-user may set `.spec.applicationTolerations.resourceRetainPolicy` to `OnFailure` to retain
-resources upon application failure, or set to `Always` to retain resources regardless of
+* It is also possible to retain the created k8s resources for debug or audit purpose. To do so,
+user may set `.spec.applicationTolerations.resourceRetainPolicy` to `OnFailure` to retain
+resources upon application failure, or set to `Always` to retain resources regardless of
 application final state.
-- This controls the behavior of k8s resources created by Operator for the application, including
-driver pod, config map, service, and PVC(if enabled). This does not apply to resources created
+* This controls the behavior of k8s resources created by Operator for the application, including
+driver pod, config map, service, and PVC(if enabled). This does not apply to resources created
 by driver (for example, executor pods). User may configure SparkConf to
-include `spark.kubernetes.executor.deleteOnTermination` for executor retention. Please refer
+include `spark.kubernetes.executor.deleteOnTermination` for executor retention. Please refer
 [Spark docs](https://spark.apache.org/docs/latest/running-on-kubernetes.html) for details.
-- The created k8s resources have `ownerReference` to their related `SparkApplication` custom
+* The created k8s resources have `ownerReference` to their related `SparkApplication` custom
 resource, such that they could be garbage collected when the `SparkApplication` is deleted.
-- Please be advised that k8s resources would not be retained if the application is configured to
-restart. This is to avoid resource quota usage increase unexpectedly or resource conflicts
+* Please be advised that k8s resources would not be retained if the application is configured to
+restart. This is to avoid resource quota usage increase unexpectedly or resource conflicts
 among multiple attempts.
 
 ## Cluster State Transition
 
-[<img src="resources/cluster_state_machine.png">](resources/application_state_machine.png)
+[![Cluster State Transition](resources/application_state_machine.png)](resources/application_state_machine.png)
 
 * Spark clusters are expected to be always running after submitted.
-* Similar to Spark applications, K8s resources created for a cluster would be deleted as the final
+* Similar to Spark applications, K8s resources created for a cluster would be deleted as the final
 stage of the cluster lifecycle by default.

docs/configuration.md

Lines changed: 17 additions & 17 deletions
@@ -29,20 +29,20 @@ Spark Operator supports different ways to configure the behavior:
 files](../build-tools/helm/spark-kubernetes-operator/values.yaml).
 * **System Properties** : when provided as system properties (e.g. via -D options to the
 operator JVM), it overrides the values provided in property file.
-* **Hot property loading** : when enabled, a
-[configmap](https://kubernetes.io/docs/concepts/configuration/configmap/) would be created with
-the operator in the same namespace. Operator can monitor updates performed on the configmap. Hot
+* **Hot property loading** : when enabled, a
+[configmap](https://kubernetes.io/docs/concepts/configuration/configmap/) would be created with
+the operator in the same namespace. Operator can monitor updates performed on the configmap. Hot
 properties reloading takes higher precedence comparing with default properties override.
-- An example use case: operator use hot properties to figure the list of namespace(s) to
+* An example use case: operator use hot properties to figure the list of namespace(s) to
 operate Spark applications. The hot properties config map can be updated and
 maintained by user or additional microservice to tune the operator behavior without
 rebooting it.
-- Please be advised that not all properties can be hot-loaded and honored at runtime.
+* Please be advised that not all properties can be hot-loaded and honored at runtime.
 Refer the list of [supported properties](./config_properties.md) for more details.
 
 To enable hot properties loading, update the **helm chart values file** with
 
-```
+```yaml
 operatorConfiguration:
   spark-operator.properties: |+
     spark.operator.dynamic.config.enabled=true

@@ -60,18 +60,18 @@ the [Dropwizard Metrics Library](https://metrics.dropwizard.io/4.2.25/). Note th
 does not have Spark UI, MetricsServlet
 and PrometheusServlet from org.apache.spark.metrics.sink package are not supported. If you are
 interested in Prometheus metrics exporting, please take a look at below
-section [Forward Metrics to Prometheus](#Forward-Metrics-to-Prometheus)
+section [Forward Metrics to Prometheus](#forward-metrics-to-prometheus)
 
 ### JVM Metrics
 
 Spark Operator collects JVM metrics
 via [Codahale JVM Metrics](https://javadoc.io/doc/com.codahale.metrics/metrics-jvm/latest/index.html)
 
-- BufferPoolMetricSet
-- FileDescriptorRatioGauge
-- GarbageCollectorMetricSet
-- MemoryUsageGaugeSet
-- ThreadStatesGaugeSet
+* BufferPoolMetricSet
+* FileDescriptorRatioGauge
+* GarbageCollectorMetricSet
+* MemoryUsageGaugeSet
+* ThreadStatesGaugeSet
 
 ### Kubernetes Client Metrics
 

@@ -81,15 +81,15 @@ via [Codahale JVM Metrics](https://javadoc.io/doc/com.codahale.metrics/metrics-j
 | kubernetes.client.http.response | Meter | Tracking the rates of HTTP response from the Kubernetes API Server |
 | kubernetes.client.http.response.failed | Meter | Tracking the rates of HTTP requests which have no response from the Kubernetes API Server |
 | kubernetes.client.http.response.latency.nanos | Histograms | Measures the statistical distribution of HTTP response latency from the Kubernetes API Server |
-| kubernetes.client.http.response.<ResponseCode> | Meter | Tracking the rates of HTTP response based on response code from the Kubernetes API Server |
-| kubernetes.client.http.request.<RequestMethod> | Meter | Tracking the rates of HTTP request based type of method to the Kubernetes API Server |
+| kubernetes.client.http.response.`ResponseCode` | Meter | Tracking the rates of HTTP response based on response code from the Kubernetes API Server |
+| kubernetes.client.http.request.`RequestMethod` | Meter | Tracking the rates of HTTP request based type of method to the Kubernetes API Server |
 | kubernetes.client.http.response.1xx | Meter | Tracking the rates of HTTP Code 1xx responses (informational) received from the Kubernetes API Server per response code. |
 | kubernetes.client.http.response.2xx | Meter | Tracking the rates of HTTP Code 2xx responses (success) received from the Kubernetes API Server per response code. |
 | kubernetes.client.http.response.3xx | Meter | Tracking the rates of HTTP Code 3xx responses (redirection) received from the Kubernetes API Server per response code. |
 | kubernetes.client.http.response.4xx | Meter | Tracking the rates of HTTP Code 4xx responses (client error) received from the Kubernetes API Server per response code. |
 | kubernetes.client.http.response.5xx | Meter | Tracking the rates of HTTP Code 5xx responses (server error) received from the Kubernetes API Server per response code. |
-| kubernetes.client.<ResourceName>.<Method> | Meter | Tracking the rates of HTTP request for a combination of one Kubernetes resource and one http method |
-| kubernetes.client.<NamespaceName>.<ResourceName>.<Method> | Meter | Tracking the rates of HTTP request for a combination of one namespace-scoped Kubernetes resource and one http method |
+| kubernetes.client.`ResourceName`.`Method` | Meter | Tracking the rates of HTTP request for a combination of one Kubernetes resource and one http method |
+| kubernetes.client.`NamespaceName`.`ResourceName`.`Method` | Meter | Tracking the rates of HTTP request for a combination of one namespace-scoped Kubernetes resource and one http method |
 
 ### Forward Metrics to Prometheus
 

@@ -141,4 +141,4 @@ kubectl port-forward --address 0.0.0.0 pod/prometheus-server-654bc74fc9-8hgkb 8
 
 open your browser with address `localhost:8080`. Click on Status Targets tab, you should be able
 to find target as below.
-[<img src="resources/prometheus.png">](resources/prometheus.png)
+[![Prometheus](resources/prometheus.png)](resources/prometheus.png)

docs/operations.md

Lines changed: 7 additions & 5 deletions
@@ -17,15 +17,17 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-### Compatibility
+# Operations
+
+## Compatibility
 
 - Java 17, 21 and 24
 - Kubernetes version compatibility:
-+ k8s version >= 1.30 is recommended. Operator attempts to be API compatible as possible, but
+- k8s version >= 1.30 is recommended. Operator attempts to be API compatible as possible, but
 patch support will not be performed on k8s versions that reached EOL.
 - Spark versions 3.5 or above.
 
-### Spark Application Namespaces
+## Spark Application Namespaces
 
 By default, Spark applications are created in the same namespace as the operator deployment.
 You many also configure the chart deployment to add necessary RBAC resources for

@@ -38,7 +40,7 @@ in `values.yaml`) for the Helm chart.
 
 To override single parameters you can use `--set`, for example:
 
-```
+```bash
 helm install --set image.repository=<my_registory>/spark-kubernetes-operator \
 -f build-tools/helm/spark-kubernetes-operator/values.yaml \
 build-tools/helm/spark-kubernetes-operator/

@@ -47,7 +49,7 @@ helm install --set image.repository=<my_registory>/spark-kubernetes-operator \
 You can also provide multiple custom values file by using the `-f` flag, the latest takes
 higher precedence:
 
-```
+```bash
 helm install spark-kubernetes-operator \
 -f build-tools/helm/spark-kubernetes-operator/values.yaml \
 -f my_values.yaml \
docs/spark_custom_resources.md

Lines changed: 20 additions & 28 deletions
@@ -17,21 +17,21 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-## Spark Operator API
+# Spark Operator API
 
-The core user facing API of the Spark Kubernetes Operator is the `SparkApplication` and
-`SparkCluster` Custom Resources Definition (CRD). Spark custom resource extends
+The core user facing API of the Spark Kubernetes Operator is the `SparkApplication` and
+`SparkCluster` Custom Resources Definition (CRD). Spark custom resource extends
 standard k8s API, defines Spark Application spec and tracks status.
 
 Once the Spark Operator is installed and running in your Kubernetes environment, it will
-continuously watch SparkApplication(s) and SparkCluster(s) submitted, via k8s API client or
+continuously watch SparkApplication(s) and SparkCluster(s) submitted, via k8s API client or
 kubectl by the user, orchestrate secondary resources (pods, configmaps .etc).
 
 Please check out the [quickstart](../README.md) as well for installing operator.
 
 ## SparkApplication
 
-SparkApplication can be defined in YAML format. User may configure the application entrypoint
+SparkApplication can be defined in YAML format. User may configure the application entrypoint
 and configurations. Let's start with the [Spark-Pi example](../examples/pi.yaml):
 
 ```yaml

@@ -59,7 +59,7 @@ spec:
 After application is submitted, Operator will add status information to your application based on
 the observed state:
 
-```
+```bash
 kubectl get sparkapp pi -o yaml
 ```
 

@@ -101,8 +101,8 @@ refer [Spark doc](https://spark.apache.org/docs/latest/running-on-kubernetes.htm
 ## Enable Additional Ingress for Driver
 
 Operator may create [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) for
-Spark driver of running applications on demand. For example, to expose Spark UI - which is by
-default enabled on driver port 4040, you may configure
+Spark driver of running applications on demand. For example, to expose Spark UI - which is by
+default enabled on driver port 4040, you may configure
 
 ```yaml
 spec:

@@ -132,16 +132,16 @@ spec:
 number: 80
 ```
 
-Spark Operator by default would populate the `.spec.selector` field of the created Service to match
+Spark Operator by default would populate the `.spec.selector` field of the created Service to match
 the driver labels. If `.ingressSpec.rules` is not provided, Spark Operator would also populate one
-default rule backed by the associated Service. It's recommended to always provide the ingress spec
-to make sure it's compatible with your
+default rule backed by the associated Service. It's recommended to always provide the ingress spec
+to make sure it's compatible with your
 [IngressController](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/).
 
 ## Create and Mount ConfigMap
 
-It is possible to ask operator to create configmap so they can be used by driver and/or executor
-pods on the fly. `configMapSpecs` allows you to specify the desired metadata and data as string
+It is possible to ask operator to create configmap so they can be used by driver and/or executor
+pods on the fly. `configMapSpecs` allows you to specify the desired metadata and data as string
 literals for the configmap(s) to be created.
 
 ```yaml

@@ -155,9 +155,9 @@ spec:
 Like other app-specific resources, the created configmap has owner reference to Spark driver and
 therefore shares the same lifecycle and garbage collection mechanism with the associated app.
 
-This feature can be used to create lightweight override config files for given Spark app. For
+This feature can be used to create lightweight override config files for given Spark app. For
 example, below snippet would create and mount a configmap with metrics property file, then use it
-in SparkConf:
+in SparkConf:
 
 ```yaml
 spec:

@@ -201,17 +201,11 @@ with non-zero code), Spark Operator introduces a few different failure state for
 app status monitoring at high level, and for ease of setting up different handlers if users
 are creating / managing SparkApplications with external microservices or workflow engines.
 
-
 Spark Operator recognizes "infrastructure failure" in the best effort way. It is possible to
 configure different restart policy on general failure(s) vs. on potential infrastructure
 failure(s). For example, you may configure the app to restart only upon infrastructure
-failures. If Spark application fails as a result of
-
-```
-DriverStartTimedOut
-ExecutorsStartTimedOut
-SchedulingFailure
-```
+failures. If Spark application fails as a result of `DriverStartTimedOut`,
+`ExecutorsStartTimedOut`, `SchedulingFailure`.
 
 It is more likely that the app failed as a result of infrastructure reason(s), including
 scenarios like driver or executors cannot be scheduled or cannot initialize in configured

@@ -242,9 +236,8 @@ restartConfig:
 
 ### Timeouts
 
-It's possible to configure applications to be proactively terminated and resubmitted in particular
-cases to avoid resource deadlock.
-
+It's possible to configure applications to be proactively terminated and resubmitted in particular
+cases to avoid resource deadlock.
 
 | Field | Type | Default Value | Descritpion |
 |-----------------------------------------------------------------------------------------|---------|---------------|--------------------------------------------------------------------------------------------------------------------|

@@ -254,7 +247,6 @@ cases to avoid resource deadlock.
 | .spec.applicationTolerations.applicationTimeoutConfig.driverReadyTimeoutMillis | integer | 300000 | Time to wait for driver reaches ready state. |
 | .spec.applicationTolerations.applicationTimeoutConfig.terminationRequeuePeriodMillis | integer | 2000 | Back-off time when releasing resource need to be re-attempted for application. |
 
-
 ### Instance Config
 
 Instance Config helps operator to decide whether an application is running healthy. When

@@ -318,5 +310,5 @@ worker instances would be deployed as [StatefulSets](https://kubernetes.io/docs/
 and exposed via k8s [service(s)](https://kubernetes.io/docs/concepts/services-networking/service/).
 
 Like Pod Template Support for Applications, it's also possible to submit template(s) for the Spark
-instances for `SparkCluster` to configure spec that's not supported via SparkConf. It's worth notice
+instances for `SparkCluster` to configure spec that's not supported via SparkConf. It's worth notice
 that Spark may overwrite certain fields.
