This repository was archived by the owner on Jan 13, 2026. It is now read-only.
`site/README.md` (5 additions, 5 deletions)
@@ -10,7 +10,7 @@ This site uses [Hugo](https://github.com/gohugoio/hugo) for rendering. It is rec
### Local Hugo Rendering
- Hugo is available for many platforms. It can be installed using:
+ Hugo is available on many platforms. It can be installed using:
- Linux: Most native package managers
- macOS: `brew install hugo`
@@ -25,14 +25,14 @@ hugo server --disableFastRender
Access the site at [http://localhost:1313](http://localhost:1313). Press `Ctrl-C` when done viewing.
- The [site/content/docs/latest](./content/docs/latest) directory holds the project documentation whereas [site/themes/template/static../img/docs](./themes/template/static../img/docs) contains the images used in the documentation. Note they have to be under that folder to be properly served.
+ The [site/content/docs/latest](./content/docs/latest) directory holds the project documentation, whereas the [site/themes/template/static../img/docs](./themes/template/static../img/docs) directory contains the images used in the documentation. Note that the images have to be under that folder to be properly served.
#### Run Hugo with Docker
To ease local development and avoid polluting your local environment with tools that you rarely use,
it is possible to run the `Hugo` server via `Docker` through a `Make` target.
- ```
+ ```bash
make site-server
```
@@ -60,9 +60,9 @@ hugo
npx check-html-links ./public/
```
- ## Check formatting
+ ## Check format
- Also, another tool for checking the markdown syntax are[markdownlint-cli](https://github.com/igorshubovych/markdownlint-cli) and [prettier](https://github.com/prettier/prettier). To use them, run:
+ Other tools for checking the markdown syntax are [markdownlint-cli](https://github.com/igorshubovych/markdownlint-cli) and [prettier](https://github.com/prettier/prettier). To use them, run:
`site/content/docs/latest/howto/OIDC/OAuth2OIDC-keycloak.md` (12 additions, 12 deletions)
@@ -5,9 +5,9 @@ It covers the installation and documentation for Kubeapps interacting with two K
The installation used the [bitnami chart for Keycloak](https://github.com/bitnami/charts/tree/main/bitnami/keycloak) (version 12.0.4/2.4.8) and the [bitnami chart for Kubeapps](https://github.com/bitnami/charts/tree/main/bitnami/kubeapps) (version 7.0.0/2.3.2).
- # Keycloak Installation
+ ## Keycloak Installation
- ## SSL
+ ### SSL
In order to support OIDC or OAuth, most servers and proxies require HTTPS. By default, the certificate created by the Helm chart / Keycloak server is both invalid (error with the `notBefore` attribute) and based on a deprecated certificate format, making it incompatible to use (i.e. it is based on Common Name instead of SAN and is rejected).
@@ -82,7 +82,7 @@ Note that the names of the keystore and truststore matters and must be exactly a
To provide a default install, few values need to be provided in the values file; they are mostly default passwords and the name of the secret created in Step 3 above.
@@ -131,13 +131,13 @@ Then just deploy Keycloak either using Kubeapps UI or helm cli as follows:
Follow the [Keycloak documentation](https://www.keycloak.org/documentation) to create and configure a new Realm to work with.
This section will focus on a few aspects to configure for the SSO scenario to work.
- ## Groups Claim
+ ### Groups Claim
By default, there is no "groups" scope/claim. We will create a global client scope for groups.
@@ -161,7 +161,7 @@ Once the client scope is created, you should be redirected to a page with severa
Note: if you navigate to "Client Scopes" and then select the tab "Default Client Scopes", you should be able to see the newly created "groups" scope in the "available client scopes" list.
- ## Clients
+ ### Clients
In a much simplified view, Clients represent the applications to be protected and accessed via SSO and OIDC. Here, the environment consists of the Kubeapps web app and two Kubernetes clusters, so we need to create three clients.
@@ -201,7 +201,7 @@ Once created, configure the authentication as follows:
- Configure the "Access Type" to be "confidential". This will add a new "Credentials" tab from which you can get the client secret.
- Ensure "Standard Flow Enabled" is enabled; this is required for the login screen.
- "Direct Access Grants Enabled" can be disabled.
- - In the "Valid Redirect URIs" field, enter "http://localhost:8000/\*" as a placeholder. We will need to revisit this field once we know the public hostname of kubeapps
+ - In the "Valid Redirect URIs" field, enter `http://localhost:8000/*` as a placeholder. We will need to revisit this field once we know the public hostname of Kubeapps.
- Save
As for the cluster clients, we need to configure the client scopes:
@@ -252,17 +252,17 @@ In this option, the claim is statically defined via a mapper similar to the one
The two client ids will be injected in the audience claim automatically.
- ## Users
+ ### Users in Keycloak
Users are intuitive to create, but they must be configured with a "verified" email address.
The OAuth proxy used in Kubeapps requires an email address as the username. Furthermore, if the email is not marked as verified, JWT validation and, therefore, authentication will fail.
In order to test multiple users with different levels of authorization, it is useful to create them with multiple dummy email addresses. This can be done by ensuring that when the user is created, the field "email verified" is ON (skipping an actual email verification workflow).
- # Kubeapps Installation
+ ## Kubeapps Installation
- ## Helm Install
+ ### Kubeapps Helm Install
Few changes are required to values.yaml for the helm installation:
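The concrete values fall outside this hunk; as a hedged sketch, the OIDC-related entries in the Bitnami Kubeapps chart's `authProxy` section might look like the following (client ID, secret, and issuer URL are placeholders for the Keycloak client created earlier):

```yaml
authProxy:
  enabled: true
  provider: oidc
  clientID: kubeapps                # the Keycloak client for the web app
  clientSecret: <from the client's "Credentials" tab>
  cookieSecret: <random base64 string of 16, 24, or 32 bytes>
  extraFlags:
    - --oidc-issuer-url=https://<keycloak-hostname>/realms/<realm>
```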
Once Kubeapps is installed and the load balancer is ready, we need to go back to Keycloak to configure the callback URL:
- Navigate to the `kubeapps` Client
- In the "Valid Redirect URIs" field, enter the callback URL for Kubeapps. It will be of the form `http://<hostname>/oauth2/callback` (where `<hostname>` is the load balancer hostname).
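For illustration, with a hypothetical load balancer hostname the callback URL is composed as follows:

```shell
# Placeholder hostname; substitute the real load balancer hostname
HOSTNAME="kubeapps.example.com"
echo "http://${HOSTNAME}/oauth2/callback"
```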
- ## Users
+ ### Users
Users created in Keycloak will be authenticated but they will not have access to the cluster resources by default. Make sure to create role bindings to users and/or groups in both clusters.
This example uses `oauth2-proxy`'s generic OIDC provider with Google, but it is applicable to any OIDC provider such as Keycloak, Dex, Okta, or Azure Active Directory. Note that the issuer URL is passed as an additional flag here, together with an option to enable the cookie being set over an insecure connection, for local development only:
- **Example 2: Using a custom oauth2-proxy provider**
+ ### Example 2: Using a custom oauth2-proxy provider
Some of the specific providers that come with `oauth2-proxy` use OpenID Connect to obtain the required IDToken and can be used instead of the generic oidc provider. Currently this includes only the GitLab, Google and LoginGov providers (see [OAuth2_Proxy's provider configuration](https://oauth2-proxy.github.io/oauth2-proxy/docs/configuration/overview) for the full list of OAuth2 providers). The user authentication flow is the same as above, with some small UI differences, such as the default login button being customized to the provider (rather than "Login with OpenID Connect"), or improved presentation when accepting the requested scopes (as is the case with Google, but only visible if you request extra scopes).
- **Example 3: Authentication for Kubeapps on a GKE cluster**
+ ### Example 3: Authentication for Kubeapps on a GKE cluster
Google Kubernetes Engine does not allow an OIDC IDToken to be used to authenticate requests to the managed API server, instead requiring the standard OAuth2 access token.
For this reason, when deploying Kubeapps on GKE we need to ensure that
`site/content/docs/latest/howto/OIDC/using-an-OIDC-provider-with-pinniped.md` (1 addition, 1 deletion)
@@ -88,7 +88,7 @@ But, what if this `kube-controller-manager` is not a normal pod on a schedulable
In managed clusters, such as AKS, Pinniped cannot read the cluster's certificate and key. In this case, Pinniped will have a fallback mechanism: the [impersonation proxy](https://pinniped.dev/docs/background/architecture/). It simply creates a LoadBalancer service that proxies the actual Kubernetes API. For this reason, when using Kubeapps in managed clusters using Pinniped, you'll need to use the Impersonation Proxy URL (and CA certificate) instead of the usual k8s API server URL.
- Assuming you have successfully [installed Pinniped](#installing-pinniped) and configured the [JWTAuthenticator](#configure-pinniped-to-trust-your-oidc-identity-provider), you have to retrieve the Impersonation Proxy URL and CA by inspecting the `CredentialIssuer` object. To do so, you can run the following commands:
+ Assuming you have successfully [installed Pinniped](#installing-pinniped-concierge) and configured the [JWTAuthenticator](#configure-pinniped-concierge-to-trust-your-oidc-identity-provider), you have to retrieve the Impersonation Proxy URL and CA by inspecting the `CredentialIssuer` object. To do so, you can run the following commands:
`site/content/docs/latest/howto/private-app-repository.md` (36 additions, 36 deletions)
@@ -40,9 +40,9 @@ To install a Harbor registry in the cluster:
2. Update the following parameter in the deployment values:
- -`service.tls.enabled`: Set to `false` to deactivate the TLS settings. Alternatively, you can provide a valid TSL certificate (check [Bitnami Harbor Helm chart documentation](https://github.com/bitnami/charts/tree/main/bitnami/harbor#parameters) for more information).
+    - `service.tls.enabled`: Set to `false` to deactivate the TLS settings. Alternatively, you can provide a valid TLS certificate (check [Bitnami Harbor Helm chart documentation](https://github.com/bitnami/charts/tree/main/bitnami/harbor#parameters) for more information).
3. Create a new Project named **my-helm-repo**. Each project will serve as a Package repository (in this example, a Helm chart repository).
- 
+    
- - It is possible to configure Harbor to use HTTP basic authentication if you set the `Access Level` of the project to `non public`. This enforces authentication to access the packages in the repository from an external client (Helm CLI, Kubeapps or any other).
+    - It is possible to configure Harbor to use HTTP basic authentication if you set the `Access Level` of the project to `non public`. This enforces authentication to access the packages in the repository from an external client (Helm CLI, Kubeapps or any other).
4. Click the project name to view the project details page, then click the **Helm Charts** tab to list all Helm charts.
- 
+    
5. Click the **Upload** button to upload the Helm chart you previously created. Alternatively, you can use the `helm` command to upload the chart.
- > Please refer to ['Manage Helm Charts in Harbor'](https://goharbor.io/docs/2.6.0/working-with-projects/working-with-images/managing-helm-charts) for more details.
+    > Please refer to ['Manage Helm Charts in Harbor'](https://goharbor.io/docs/2.6.0/working-with-projects/working-with-images/managing-helm-charts) for more details.
### Harbor: Configure the repository in Kubeapps
@@ -156,16 +156,16 @@ To use ChartMuseum with Kubeapps:
1. First configure a public repo in Kubeapps to deploy its Helm chart from the `stable` repository:
2. Deploy the latest version by using Kubeapps. Update the following parameters in the deployment values:
- -`env.open.DISABLE_API`: Set to `false` to use the ChartMuseum API to push new charts.
- -`persistence.enabled`: Set to `true` to enable persistence for the stored charts.
+    - `env.open.DISABLE_API`: Set to `false` to use the ChartMuseum API to push new charts.
+    - `persistence.enabled`: Set to `true` to enable persistence for the stored charts.
- > Note that this will create a [Kubernetes Persistent Volume Claim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#lifecycle-of-a-volume-and-claim) so depending on your Kubernetes provider you may need to manually allocate the required Persistent Volume to satisfy the claim. Some Kubernetes providers will automatically create PVs for you so setting this value to `true` will be enough.
+    > Note that this will create a [Kubernetes Persistent Volume Claim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#lifecycle-of-a-volume-and-claim), so depending on your Kubernetes provider you may need to manually allocate the required Persistent Volume to satisfy the claim. Some Kubernetes providers will automatically create PVs for you, so setting this value to `true` will be enough.
`site/content/docs/latest/reference/developer/README.md` (3 additions, 3 deletions)
@@ -1,18 +1,18 @@
# The Kubeapps Components
- ###Kubeapps dashboard
+ ## Kubeapps dashboard
The dashboard is the main UI component of the Kubeapps project. Written in JavaScript, the dashboard uses the React JavaScript library for the frontend.
Please refer to the [Kubeapps Dashboard Developer Guide](./dashboard.md) for the developer setup.
- ###Kubeapps APIs service
+ ## Kubeapps APIs service
The Kubeapps APIs service is the main backend component of the Kubeapps project. Written in Go, the APIs service provides a pluggable gRPC service that is used to support different Kubernetes packaging formats.
See the [Kubeapps APIs Service Developer Guide](kubeapps-apis.md) for more information.
- ###asset-syncer
+ ## asset-syncer
The `asset-syncer` component is a tool that scans a Helm chart repository and populates chart metadata in the database. This metadata is then served by the `kubeapps-apis` component.