This repository was archived by the owner on Jan 13, 2026. It is now read-only.

Commit b328b5e

Add manual fixes for md linter

Signed-off-by: Antonio Gamez Diaz <agamez@vmware.com>

1 parent 6d50bdf commit b328b5e

20 files changed

Lines changed: 157 additions & 162 deletions

site/README.md

Lines changed: 5 additions & 5 deletions
@@ -10,7 +10,7 @@ This site uses [Hugo](https://github.com/gohugoio/hugo) for rendering. It is rec

 ### Local Hugo Rendering

-Hugo is available for many platforms. It can be installed using:
+Hugo is available on many platforms. It can be installed using:

 - Linux: Most native package managers
 - macOS: `brew install hugo`

@@ -25,14 +25,14 @@ hugo server --disableFastRender

 Access the site at [http://localhost:1313](http://localhost:1313). Press `Ctrl-C` when done viewing.

-The [site/content/docs/latest](./content/docs/latest) directory holds the project documentation whereas [site/themes/template/static../img/docs](./themes/template/static../img/docs) contains the images used in the documentation. Note they have to be under that folder to be properly served.
+The [site/content/docs/latest](./content/docs/latest) directory holds the project documentation whereas the [site/themes/template/static../img/docs](./themes/template/static../img/docs) directory contains the images used in the documentation. Note they have to be under that folder to be properly served.

 #### Run Hugo with Docker

 To ease the local development and prevent you from polluting your local environment with tools that rarely use,
 it is possible to run the `Hugo` server via `Docker` through a `Make` target.

-```
+```bash
 make site-server
 ```

@@ -60,9 +60,9 @@ hugo
 npx check-html-links ./public/
 ```

-## Check formatting
+## Check format

-Also, another tool for checking the markdown syntax are [markdownlint-cli](https://github.com/igorshubovych/markdownlint-cli) and [prettier](https://github.com/prettier/prettier). To use them, run:
+Also, another tool for checking the markdown syntax is [markdownlint-cli](https://github.com/igorshubovych/markdownlint-cli) and [prettier](https://github.com/prettier/prettier). To use them, run:

 ```bash
 cd site

site/content/docs/latest/howto/OIDC/OAuth2OIDC-keycloak.md

Lines changed: 12 additions & 12 deletions
@@ -5,9 +5,9 @@ It covers the installation and documentation for Kubeapps interacting with two K

 The installation used the [bitnami chart for Keycloak](https://github.com/bitnami/charts/tree/main/bitnami/keycloak) (version 12.0.4/2.4.8) and [bitnami chart for Kubeapps](https://github.com/bitnami/charts/tree/main/bitnami/kubeapps) (version 7.0.0/2.3.2)

-# Keycloak Installation
+## Keycloak Installation

-## SSL
+### SSL

 In order to support OIDC or OAuth, most servers and proxies require HTTPS. By default, the certificate created by the helm chart / Keycloak server is both invalid (error with `notBefore` attribute) and also based on a deprecated certificate version making it incompatible to use (i.e. is it based on Common Name instead of SAN and is rejected).

@@ -82,7 +82,7 @@ Note that the names of the keystore and truststore matters and must be exactly a
 kubectl create secret generic keycloak-tls --from-file=./keycloak-0.keystore.jks --from-file=./keycloak.truststore.jks
 ```

-## Helm Install
+### Keycloak Helm Install

 To provide a default install, not many values must be provided in the values file - the values are mostly default passwords and the name of the secret created in Step 3 above.

@@ -131,13 +131,13 @@ Then just deploy Keycloak either using Kubeapps UI or helm cli as follows:
 helm install keycloak bitnami/keycloak --values my-values.yaml
 ```

-# Keycloak Configuration
+## Keycloak Configuration

 Follow the [Keycloak documentation](https://www.keycloak.org/documentation) to create and configure a new Realm to work with.

 This section will focus on a few aspects to configure for the SSO scenario to work.

-## Groups Claim
+### Groups Claim

 By default, there is no "groups" scope/claim. We will create a global client scope for groups.

@@ -161,7 +161,7 @@ Once the client scope is created, you should be redirected to a page with severa

 Note: if you navigate to "Client Scopes" and then select the tab "Default Client Scopes" you should be able to see the newly created "groups" scope in the "available client scopes" lists.

-## Clients
+### Clients

 In probably a very simplified view, Clients represent the application to be protected and accessed via SSO and OIDC. Here, the environment consisted of the Kubeapps web app and two Kubernetes clusters. So we need to create three clients.

@@ -201,7 +201,7 @@ Once created, configure the authentication as follows:
 - Configure the "Access Type" to be "confidential". This will add a new "Credentials" tab from which you can get the client secret
 - Ensure "Standard Flow Enabled" is enabled, this is required for the login screen.
 - "Direct Access Grants Enabled" can be disabled.
-- In the "Valid Redirect URIs" field, enter "http://localhost:8000/\*" as a placeholder. We will need to revisit this field once we know the public hostname of kubeapps
+- In the "Valid Redirect URIs" field, enter `http://localhost:8000/\*` as a placeholder. We will need to revisit this field once we know the public hostname of kubeapps
 - Save

 As for the cluster clients, we need to configure the client scopes:

@@ -252,17 +252,17 @@ In this option, the claim is statically defined via a mapper similar to the one

 The two client ids will be injected in the audience claim automatically.

-## Users
+### Users in Keycloak

 Users are intuitive to create. But they must be configured with a "verified" email address.

 The oauth proxy used in kubeapps requires email as the username. Furthermore, if the email is not marked as verified, JWT validation will fail and authentication will fail.

 In order to test multiple users with different levels of authorization, it is useful to create them with multiple dummy email addresses. This can be done by ensuring that when the user is created, the field "email verified" is ON (skipping an actual email verification workflow).

-# Kubeapps Installation
+## Kubeapps Installation

-## Helm Install
+### Kubeapps Helm Install

 Few changes are required to values.yaml for the helm installation:

@@ -314,13 +314,13 @@ authProxy:
 - --oidc-issuer-url=https://<xxx>.us-east-2.elb.amazonaws.com/auth/realms/AWS
 ```

-## Configuration
+### Configuration

 Once Kubeapps is installed and the load balancer is ready, we need to go back to Keycloak to configure the callback URL:

 - Navigate to the `kubeapps` Client
 - In the "Valid Redirect URIs" enter the callback URL for Kubeapps. It will be of the form "http://`<hostname>`/oauth2/callback" (where `<hostname>` is the load balancer hostname)

-## Users
+### Users

 Users created in Keycloak will be authenticated but they will not have access to the cluster resources by default. Make sure to create role bindings to users and/or groups in both clusters.
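Most hunks in this file apply one mechanical fix: every heading below the document title is demoted one level so the file keeps a single top-level `#` (markdownlint rule MD025). The commit message says these fixes were manual, but the rule being applied can be sketched in Python (an illustrative helper, not part of the commit):

```python
import re

def demote_headings(markdown: str) -> str:
    """Demote every ATX heading by one level, except the first H1.

    Mirrors the MD025 fixes in this file: the document keeps one
    top-level '#' title, and later '#' sections become '##',
    '##' sections become '###', and so on.
    """
    out, seen_h1 = [], False
    for line in markdown.splitlines():
        m = re.match(r"^(#{1,5}) ", line)
        if m and len(m.group(1)) == 1 and not seen_h1:
            seen_h1 = True          # keep the document title as-is
            out.append(line)
        elif m:
            out.append("#" + line)  # '#' -> '##', '##' -> '###', ...
        else:
            out.append(line)        # non-heading lines pass through
    return "\n".join(out)
```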

site/content/docs/latest/howto/OIDC/OAuth2OIDC-oauth2-proxy.md

Lines changed: 3 additions & 3 deletions
@@ -13,7 +13,7 @@ Kubeapps chart allows you to automatically deploy the proxy for you as a sidecar
 --set authProxy.extraFlags="{<other flags>,--proxy-prefix=/subpath/oauth2}"\
 ```

-**Example 1: Using the OIDC provider**
+### Example 1: Using the OIDC provider

 This example uses `oauth2-proxy`'s generic OIDC provider with Google, but is applicable to any OIDC provider such as Keycloak, Dex, Okta or Azure Active Directory etc. Note that the issuer url is passed as an additional flag here, together with an option to enable the cookie being set over an insecure connection for local development only:

@@ -28,7 +28,7 @@ helm install kubeapps bitnami/kubeapps \
 --set authProxy.extraFlags="{--cookie-secure=false,--oidc-issuer-url=https://accounts.google.com}" \
 ```

-**Example 2: Using a custom oauth2-proxy provider**
+### Example 2: Using a custom oauth2-proxy provider

 Some of the specific providers that come with `oauth2-proxy` are using OpenIDConnect to obtain the required IDToken and can be used instead of the generic oidc provider. Currently this includes only the GitLab, Google and LoginGov providers (see [OAuth2_Proxy's provider configuration](https://oauth2-proxy.github.io/oauth2-proxy/docs/configuration/overview) for the full list of OAuth2 providers). The user authentication flow is the same as above, with some small UI differences, such as the default login button is customized to the provider (rather than "Login with OpenID Connect"), or improved presentation when accepting the requested scopes (as is the case with Google, but only visible if you request extra scopes).

@@ -45,7 +45,7 @@ helm install kubeapps bitnami/kubeapps \
 --set authProxy.extraFlags="{--cookie-secure=false}"
 ```

-**Example 3: Authentication for Kubeapps on a GKE cluster**
+### Example 3: Authentication for Kubeapps on a GKE cluster

 Google Kubernetes Engine does not allow an OIDC IDToken to be used to authenticate requests to the managed API server, instead requiring the standard OAuth2 access token.
 For this reason, when deploying Kubeapps on GKE we need to ensure that

site/content/docs/latest/howto/OIDC/using-an-OIDC-provider-with-pinniped.md

Lines changed: 1 addition & 1 deletion
@@ -88,7 +88,7 @@ But, what if this `kube-controller-manager` is not a normal pod on a schedulable

 In managed clusters, such as AKS, Pinniped cannot read the cluster's certificate and key. In this case, Pinniped will have a fallback mechanism: the [impersonation proxy](https://pinniped.dev/docs/background/architecture/). It simply creates a LoadBalancer service that proxies the actual Kubernetes API. For this reason, when using Kubeapps in managed clusters using Pinniped, you'll need to use the Impersonation Proxy URL (and CA certificate) instead of the usual k8s API server URL.

-Assuming you have successfully [installed Pinniped](#installing-pinniped) and configured the [JWTAuthenticator](#configure-pinniped-to-trust-your-oidc-identity-provider), you have to retrieve the Impersonation Proxy URL and CA by inspecting the `CredentialIssuer` object. To do so, you can run the following commands:
+Assuming you have successfully [installed Pinniped](#installing-pinniped-concierge) and configured the [JWTAuthenticator](#configure-pinniped-concierge-to-trust-your-oidc-identity-provider), you have to retrieve the Impersonation Proxy URL and CA by inspecting the `CredentialIssuer` object. To do so, you can run the following commands:

 Retrieving the Impersonation Proxy URL:

site/content/docs/latest/howto/private-app-repository.md

Lines changed: 36 additions & 36 deletions
@@ -40,9 +40,9 @@ To install a Harbor registry in the cluster:

 2. Update the following parameter in the deployment values:

-- `service.tls.enabled`: Set to `false` to deactivate the TLS settings. Alternatively, you can provide a valid TSL certificate (check [Bitnami Harbor Helm chart documentation](https://github.com/bitnami/charts/tree/main/bitnami/harbor#parameters) for more information).
+   - `service.tls.enabled`: Set to `false` to deactivate the TLS settings. Alternatively, you can provide a valid TSL certificate (check [Bitnami Harbor Helm chart documentation](https://github.com/bitnami/charts/tree/main/bitnami/harbor#parameters) for more information).

-![Harbor Deploy Form](../img/harbor-deploy-form.png)
+   ![Harbor Deploy Form](../img/harbor-deploy-form.png)

 3. Deploy the chart and wait for it to be ready.

@@ -56,40 +56,40 @@ To install a Harbor registry in the cluster:

 1. First, create a Helm chart package:

-```console
-$ helm package /path/to/my/chart
-Successfully packaged chart and saved it to: /path/to/my/chart/my-chart-1.0.0.tgz
-```
+   ```console
+   $ helm package /path/to/my/chart
+   Successfully packaged chart and saved it to: /path/to/my/chart/my-chart-1.0.0.tgz
+   ```

 2. Second, login into Harbor admin portal following the instructions in the chart notes:

-```console
-1. Get the Harbor URL:
+   ```console
+   1. Get the Harbor URL:

-echo "Harbor URL: https://127.0.0.1:8080/"
-kubectl port-forward --namespace default svc/my-harbor 8080:80 &
+   echo "Harbor URL: https://127.0.0.1:8080/"
+   kubectl port-forward --namespace default svc/my-harbor 8080:80 &

-2. Login with the following credentials to see your Harbor application
+   2. Login with the following credentials to see your Harbor application

-echo Username: "admin"
-echo Password: $(kubectl get secret --namespace default my-harbor-core-envvars -o jsonpath="{.data.HARBOR_ADMIN_PASSWORD}" | base64 --decode)
-```
+   echo Username: "admin"
+   echo Password: $(kubectl get secret --namespace default my-harbor-core-envvars -o jsonpath="{.data.HARBOR_ADMIN_PASSWORD}" | base64 --decode)
+   ```

 3. Create a new Project named **my-helm-repo**. Each project will serve as a Package repository (in this example, a Helm chart repository).

-![Harbor new project](../img/harbor-new-project.png)
+   ![Harbor new project](../img/harbor-new-project.png)

-- It is possible to configure Harbor to use HTTP basic authentication if you set the `Access Level` of the project to `non public`. This enforces authentication to access the packages in the repository from an external client (Helm CLI, Kubeapps or any other).
+   - It is possible to configure Harbor to use HTTP basic authentication if you set the `Access Level` of the project to `non public`. This enforces authentication to access the packages in the repository from an external client (Helm CLI, Kubeapps or any other).

 4. Click the project name to view the project details page, then click **Helm Charts** tab to list all helm charts.

-![Harbor list charts](../img/harbor-list-charts.png)
+   ![Harbor list charts](../img/harbor-list-charts.png)

 5. Click **Upload** button to upload the Helm chart you previously created. You can also use the `helm` command to upload the chart too.

-![Harbor upload chart](../img/harbor-upload-chart.png)
+   ![Harbor upload chart](../img/harbor-upload-chart.png)

-> Please refer to ['Manage Helm Charts in Harbor'](https://goharbor.io/docs/2.6.0/working-with-projects/working-with-images/managing-helm-charts) for more details.
+   > Please refer to ['Manage Helm Charts in Harbor'](https://goharbor.io/docs/2.6.0/working-with-projects/working-with-images/managing-helm-charts) for more details.

 ### Harbor: Configure the repository in Kubeapps

@@ -156,16 +156,16 @@ To use ChartMuseum with Kubeapps:

 1. First configure a public repo in Kubeapps to deploy its Helm chart from the `stable` repository:

-![ChartMuseum chart](../img/chartmuseum-chart.png)
+   ![ChartMuseum chart](../img/chartmuseum-chart.png)

 2. Deploy last version by using Kubeapps. Update the following parameters in the deployment values:

-- `env.open.DISABLE_API`: Set to `false` to use the ChartMuseum API to push new charts.
-- `persistence.enabled`: Set to `true` to enable persistence for the stored charts.
+   - `env.open.DISABLE_API`: Set to `false` to use the ChartMuseum API to push new charts.
+   - `persistence.enabled`: Set to `true` to enable persistence for the stored charts.

-> Note that this will create a [Kubernetes Persistent Volume Claim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#lifecycle-of-a-volume-and-claim) so depending on your Kubernetes provider you may need to manually allocate the required Persistent Volume to satisfy the claim. Some Kubernetes providers will automatically create PVs for you so setting this value to `true` will be enough.
+   > Note that this will create a [Kubernetes Persistent Volume Claim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#lifecycle-of-a-volume-and-claim) so depending on your Kubernetes provider you may need to manually allocate the required Persistent Volume to satisfy the claim. Some Kubernetes providers will automatically create PVs for you so setting this value to `true` will be enough.

-![ChartMuseum Deploy Form](../img/chartmuseum-deploy-form.png)
+   ![ChartMuseum Deploy Form](../img/chartmuseum-deploy-form.png)

 ### ChartMuseum: Upload a chart

@@ -175,21 +175,21 @@ Once ChartMuseum is deployed you will be able to upload a chart.

 1. In one terminal open a port-forward tunnel to the application:

-```console
-$ export POD_NAME=$(kubectl get pods --namespace default -l "app=chartmuseum" -l "release=my-chartrepo" -o jsonpath="{.items[0].metadata.name}")
-$ kubectl port-forward $POD_NAME 8080:8080 --namespace default
-Forwarding from 127.0.0.1:8080 -> 8080
-Forwarding from [::1]:8080 -> 8080
-```
+   ```console
+   $ export POD_NAME=$(kubectl get pods --namespace default -l "app=chartmuseum" -l "release=my-chartrepo" -o jsonpath="{.items[0].metadata.name}")
+   $ kubectl port-forward $POD_NAME 8080:8080 --namespace default
+   Forwarding from 127.0.0.1:8080 -> 8080
+   Forwarding from [::1]:8080 -> 8080
+   ```

 2. In a different terminal you can push your chart:

-```console
-$ helm package /path/to/my/chart
-Successfully packaged chart and saved it to: /path/to/my/chart/my-chart-1.0.0.tgz
-curl --data-binary "@my-chart-1.0.0.tgz" http://localhost:8080/api/charts
-{"saved":true}
-```
+   ```console
+   $ helm package /path/to/my/chart
+   Successfully packaged chart and saved it to: /path/to/my/chart/my-chart-1.0.0.tgz
+   curl --data-binary "@my-chart-1.0.0.tgz" http://localhost:8080/api/charts
+   {"saved":true}
+   ```

 ### ChartMuseum: Authentication/Authorization
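The Harbor chart notes in this file pipe the admin password through `base64 --decode`, because Kubernetes stores secret data base64-encoded and `kubectl ... -o jsonpath` returns the still-encoded string. The same decoding step in Python (the encoded value below is a made-up placeholder, not a real credential):

```python
import base64

# Mirrors the `| base64 --decode` step from the Harbor chart notes:
# secret fields come back base64-encoded from the Kubernetes API.
encoded = "SGFyYm9yMTIzNDU="  # hypothetical HARBOR_ADMIN_PASSWORD field
password = base64.b64decode(encoded).decode("utf-8")
print(password)  # Harbor12345
```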

site/content/docs/latest/reference/developer/README.md

Lines changed: 3 additions & 3 deletions
@@ -1,18 +1,18 @@
 # The Kubeapps Components

-### Kubeapps dashboard
+## Kubeapps dashboard

 The dashboard is the main UI component of the Kubeapps project. Written in JavaScript, the dashboard uses the React JavaScript library for the frontend.

 Please refer to the [Kubeapps Dashboard Developer Guide](./dashboard.md) for the developer setup.

-### Kubeapps APIs service
+## Kubeapps APIs service

 The Kubeapps APIs service is the main backend component of the Kubeapps project. Written in Go, the APIs service provides a pluggable gRPC service that is used to support different Kubernetes packaging formats.

 See the [Kubeapps APIs Service Developer Guide](kubeapps-apis.md) for more information.

-### asset-syncer
+## asset-syncer

 The `asset-syncer` component is a tool that scans a Helm chart repository and populates chart metadata in the database. This metadata is then served by the `kubeapps-apis` component.

site/content/docs/latest/reference/developer/apprepository-controller.md

Lines changed: 1 addition & 2 deletions
@@ -29,8 +29,7 @@ Based off the [Kubernetes Sample Controller](https://github.com/kubernetes/sampl
 - [Kubernetes cluster (v1.8+)](https://kubernetes.io/docs/setup/)
 - [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
 - [Telepresence](https://telepresence.io)
-
-_Telepresence is not a hard requirement, but is recommended for a better developer experience_
+- _Telepresence is not a hard requirement, but is recommended for a better developer experience_

 ## Download the kubeapps source code

site/content/docs/latest/reference/developer/asset-syncer.md

Lines changed: 1 addition & 0 deletions
@@ -11,6 +11,7 @@ The `asset-syncer` component is a tool that scans a Helm chart repository and po
 - [Kubernetes cluster (v1.8+)](https://kubernetes.io/docs/setup/). [Minikube](https://github.com/kubernetes/minikube) is recommended.
 - [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
 - [Telepresence](https://telepresence.io)
+- _Telepresence is not a hard requirement, but is recommended for a better developer experience_

 ## Download the Kubeapps source code

site/content/docs/latest/reference/developer/dashboard.md

Lines changed: 1 addition & 2 deletions
@@ -11,8 +11,7 @@ The dashboard is the main UI component of the Kubeapps project. Written in JavaS
 - [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
 - [Docker CE](https://www.docker.com/community-edition)
 - [Telepresence](https://telepresence.io)
-
-_Telepresence is not a hard requirement, but is recommended for a better developer experience_
+- _Telepresence is not a hard requirement, but is recommended for a better developer experience_

 ## Download the kubeapps source code

0 commit comments
