HelmChartInflationGenerator ignores releaseNamespace  #4593

@in-cloud-opensource

Description

Describe the bug

According to the documentation of the helm chart inflator, it should be possible to set the release namespace of the helm template.
However, when running kustomize build --enable-helm . with the kustomization.yaml below, the releaseNamespace field is ignored.

Files that can reproduce the issue

Example:

kustomization.yaml

helmChartInflationGenerator:
- chartName: nats
  chartVersion: v0.13.1
  chartRepoUrl: https://nats-io.github.io/k8s/helm/charts
  releaseName: nats
  releaseNamespace: XXXXXX

Expected output

Expected output, e.g. for the ConfigMap, with namespace set to XXXXXX:

apiVersion: v1
data:
  nats.conf: |
    # NATS Clients Port
    port: 4222

    # PID file shared with configuration reloader.
    pid_file: "/var/run/nats/nats.pid"

    ###############
    #             #
    # Monitoring  #
    #             #
    ###############
    http: 8222
    server_name:$POD_NAME
    lame_duck_duration: 120s
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/instance: nats
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nats
    app.kubernetes.io/version: 2.7.2
    helm.sh/chart: nats-0.13.1
  name: nats-config
  namespace: XXXXXX

Actual output

apiVersion: v1
data:
  nats.conf: |
    # NATS Clients Port
    port: 4222

    # PID file shared with configuration reloader.
    pid_file: "/var/run/nats/nats.pid"

    ###############
    #             #
    # Monitoring  #
    #             #
    ###############
    http: 8222
    server_name:$POD_NAME
    lame_duck_duration: 120s
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/instance: nats
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nats
    app.kubernetes.io/version: 2.7.2
    helm.sh/chart: nats-0.13.1
  name: nats-config
  namespace: default
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: nats
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nats
    app.kubernetes.io/version: 2.7.2
    helm.sh/chart: nats-0.13.1
  name: nats
  namespace: default
spec:
  clusterIP: None
  ports:
  - name: client
    port: 4222
  - name: cluster
    port: 6222
  - name: monitor
    port: 8222
  - name: metrics
    port: 7777
  - name: leafnodes
    port: 7422
  - name: gateways
    port: 7522
  selector:
    app.kubernetes.io/instance: nats
    app.kubernetes.io/name: nats
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nats-box
    chart: nats-0.13.1
  name: nats-box
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nats-box
  template:
    metadata:
      labels:
        app: nats-box
    spec:
      containers:
      - command:
        - tail
        - -f
        - /dev/null
        env:
        - name: NATS_URL
          value: nats
        image: natsio/nats-box:0.8.1
        imagePullPolicy: IfNotPresent
        name: nats-box
        resources: null
        volumeMounts: null
      volumes: null
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/instance: nats
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nats
    app.kubernetes.io/version: 2.7.2
    helm.sh/chart: nats-0.13.1
  name: nats
  namespace: default
spec:
  podManagementPolicy: Parallel
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: nats
      app.kubernetes.io/name: nats
  serviceName: nats
  template:
    metadata:
      annotations:
        prometheus.io/path: /metrics
        prometheus.io/port: "7777"
        prometheus.io/scrape: "true"
      labels:
        app.kubernetes.io/instance: nats
        app.kubernetes.io/name: nats
    spec:
      containers:
      - command:
        - nats-server
        - --config
        - /etc/nats-config/nats.conf
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: SERVER_NAME
          value: $(POD_NAME)
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: CLUSTER_ADVERTISE
          value: $(POD_NAME).nats.$(POD_NAMESPACE).svc.cluster.local
        image: nats:2.7.2-alpine
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - nats-server -sl=ldm=/var/run/nats/nats.pid
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 8222
          initialDelaySeconds: 10
          periodSeconds: 60
          successThreshold: 1
          timeoutSeconds: 5
        name: nats
        ports:
        - containerPort: 4222
          name: client
        - containerPort: 7422
          name: leafnodes
        - containerPort: 7522
          name: gateways
        - containerPort: 6222
          name: cluster
        - containerPort: 8222
          name: monitor
        - containerPort: 7777
          name: metrics
        resources: {}
        startupProbe:
          failureThreshold: 30
          httpGet:
            path: /
            port: 8222
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        volumeMounts:
        - mountPath: /etc/nats-config
          name: config-volume
        - mountPath: /var/run/nats
          name: pid
      - command:
        - nats-server-config-reloader
        - -pid
        - /var/run/nats/nats.pid
        - -config
        - /etc/nats-config/nats.conf
        image: natsio/nats-server-config-reloader:0.6.2
        imagePullPolicy: IfNotPresent
        name: reloader
        resources: null
        volumeMounts:
        - mountPath: /etc/nats-config
          name: config-volume
        - mountPath: /var/run/nats
          name: pid
      - args:
        - -connz
        - -routez
        - -subz
        - -varz
        - -prefix=nats
        - -use_internal_server_id
        - http://localhost:8222/
        image: natsio/prometheus-nats-exporter:0.9.1
        imagePullPolicy: IfNotPresent
        name: metrics
        ports:
        - containerPort: 7777
          name: metrics
        resources: {}
      shareProcessNamespace: true
      terminationGracePeriodSeconds: 120
      volumes:
      - configMap:
          name: nats-config
        name: config-volume
      - emptyDir: {}
        name: pid
  volumeClaimTemplates: null
---
apiVersion: v1
kind: Pod
metadata:
  annotations:
    helm.sh/hook: test
  labels:
    app.kubernetes.io/instance: nats
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nats
    app.kubernetes.io/version: 2.7.2
    helm.sh/chart: nats-0.13.1
  name: nats-test-request-reply
spec:
  containers:
  - command:
    - /bin/sh
    - -ec
    - |
      nats reply -s nats://$NATS_HOST:4222 'name.>' --command "echo 1" &
    - |
      "&&"
    - |
      name=$(nats request -s nats://$NATS_HOST:4222 name.test '' 2>/dev/null)
    - |
      "&&"
    - |
      [ $name = test ]
    env:
    - name: NATS_HOST
      value: nats
    image: synadia/nats-box
    name: nats-box
  restartPolicy: Never

Kustomize version
{Version:kustomize/v4.5.3 GitCommit:de6b9784912a5c1df309e6ae9152b962be4eba47 BuildDate:2022-03-24T20:51:20Z GoOs:linux GoArch:amd64}

Platform
Linux

Additional context

It seems that the ReleaseNamespace argument is missing from the mapping between HelmChartArgs and HelmChart, see:

func makeHelmChartFromHca(old *HelmChartArgs) (c HelmChart) {
	c.Name = old.ChartName
	c.Version = old.ChartVersion
	c.Repo = old.ChartRepoURL
	c.ValuesFile = old.Values
	c.ValuesInline = old.ValuesLocal
	c.ValuesMerge = old.ValuesMerge
	c.ReleaseName = old.ReleaseName
	return
}
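
A minimal sketch of the likely fix: copy ReleaseNamespace over in the mapping function. The stand-in struct definitions below are simplified (the real types live in kustomize's api/types package and carry more fields), and the target field name Namespace on HelmChart is an assumption:

```go
package main

import "fmt"

// Simplified stand-ins for kustomize's api/types structs; only the
// fields relevant to this issue are reproduced here.
type HelmChartArgs struct {
	ChartName        string
	ChartVersion     string
	ChartRepoURL     string
	ReleaseName      string
	ReleaseNamespace string
}

type HelmChart struct {
	Name        string
	Version     string
	Repo        string
	ReleaseName string
	Namespace   string
}

// makeHelmChartFromHca with the missing namespace mapping added.
func makeHelmChartFromHca(old *HelmChartArgs) (c HelmChart) {
	c.Name = old.ChartName
	c.Version = old.ChartVersion
	c.Repo = old.ChartRepoURL
	c.ReleaseName = old.ReleaseName
	c.Namespace = old.ReleaseNamespace // previously not copied over
	return
}

func main() {
	c := makeHelmChartFromHca(&HelmChartArgs{
		ChartName:        "nats",
		ReleaseName:      "nats",
		ReleaseNamespace: "XXXXXX",
	})
	fmt.Println(c.Namespace)
}
```

With this one-line addition, the generated manifests would pick up the configured namespace instead of falling back to default.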

Labels

- good first issue: denotes an issue ready for a new contributor, according to the "help wanted" guidelines.
- help wanted: denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
- kind/bug: categorizes issue or PR as related to a bug.
- needs-triage: indicates an issue or PR lacks a `triage/foo` label and requires one.
