[velero] Update velero to v1.17.0 #709
Changes from all commits: `a74fea9`, `1eb0606`, `27b9155`, `4a179e6`
```diff
@@ -27,7 +27,7 @@ namespace:
 # enabling node-agent). Required.
 image:
   repository: velero/velero
-  tag: v1.16.2
+  tag: v1.17.0
   # Digest value example: sha256:d238835e151cec91c6a811fe3a89a66d3231d9f64d09e5f3c49552672d271f38.
   # If used, it will take precedence over the image.tag.
   # digest:
```
```diff
@@ -130,7 +130,7 @@ dnsPolicy: ClusterFirst
 # If the value is a string then it is evaluated as a template.
 initContainers:
   # - name: velero-plugin-for-aws
-  #   image: velero/velero-plugin-for-aws:v1.12.2
+  #   image: velero/velero-plugin-for-aws:v1.13.0
   #   imagePullPolicy: IfNotPresent
   #   volumeMounts:
   #     - mountPath: /target
```
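Users who enable the AWS plugin would uncomment that block in a values override. A minimal sketch, assuming the upstream chart's plugin-volume convention; note the volume name `plugins` is an assumption, since the snippet in this diff is truncated before the volume mount's `name` field:

```yaml
# Hypothetical values.yaml override enabling the AWS plugin init container.
# The image tag matches this diff; the volume name "plugins" is an assumption
# (the commented example above is cut off before the volumeMounts name).
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:v1.13.0
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins
```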
```diff
@@ -512,14 +512,6 @@ configuration:
   # Resource requests/limits to specify for the repository-maintenance job. Optional.
   # https://velero.io/docs/v1.14/repository-maintenance/#resource-limitation
   repositoryMaintenanceJob:
-    requests:
-      # cpu: 500m
-      # memory: 512Mi
-    limits:
-      # cpu: 1000m
-      # memory: 1024Mi
-    # Number of latest maintenance jobs to keep for each repository
-    latestJobsCount: 3
   # Per-repository resource settings ConfigMap
   # This ConfigMap allows specifying different settings for different repositories
   # See: https://velero.io/docs/main/repository-maintenance/
```

> **Reviewer:** Since this was set as a default, shouldn't it be added as a default in the ConfigMap? As it stands, this default is simply lost now. It could be added here: https://github.com/vmware-tanzu/helm-charts/blob/main/charts/velero/values.yaml#L550. Also, all these parameters could be kept in the values and simply be used to manage the global …
>
> **Author (Contributor):** I'll add the default in the configmap. As for supporting previous …
>
> **Reviewer:** That's good enough for me; with that change we keep the current default behavior of the chart.
```diff
@@ -547,7 +539,8 @@ configuration:
   #       operator: "In"
   #       values: ["us-central1-a", "us-central1-b", "us-central1-c"]
   # priorityClassName: "low-priority" # Note: priorityClassName is only supported in global configuration
-  global: {}
+  global:
+    keepLatestMaintenanceJobs: 3
   # Repository-specific configurations
   # Repository keys are formed as: "{namespace}-{storageLocation}-{repositoryType}"
   # For example: "default-default-kopia" or "prod-s3-backup-kopia"
```
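With the retention default now living in the maintenance-job configuration, chart users who want a different value can override the global key shown in this hunk. A hedged sketch of a values override, using only keys that appear in the diff; the nesting under `configuration.repositoryMaintenanceJob` is inferred from the hunk context:

```yaml
# Hypothetical override keeping the last 5 maintenance jobs per repository
# instead of the chart's new default of 3. Key names come from this diff;
# the parent nesting is inferred from the surrounding hunk.
configuration:
  repositoryMaintenanceJob:
    global:
      keepLatestMaintenanceJobs: 5
```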
```diff
@@ -791,7 +784,7 @@ schedules: {}
 # velero.io/plugin-config: ""
 # velero.io/pod-volume-restore: RestoreItemAction
 # data:
-#   image: velero/velero-restore-helper:v1.10.2
+#   image: velero/velero:v1.17.0
 #   cpuRequest: 200m
 #   memRequest: 128Mi
 #   cpuLimit: 200m
```
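The keys in this hunk describe the restore-helper plugin ConfigMap that Velero discovers by label. A hedged sketch of the full object those commented lines would render to, using only labels and data keys shown in the diff; the ConfigMap `name` and `namespace` are illustrative assumptions, not values from this change:

```yaml
# Hypothetical rendered ConfigMap for the pod-volume-restore helper.
# Velero selects it by the velero.io/* labels; name/namespace are assumed.
apiVersion: v1
kind: ConfigMap
metadata:
  name: fs-restore-action-config   # assumption: any name works, labels matter
  namespace: velero                # assumption: the chart's release namespace
  labels:
    velero.io/plugin-config: ""
    velero.io/pod-volume-restore: RestoreItemAction
data:
  # Per this diff, the restore helper now ships in the main velero image.
  image: velero/velero:v1.17.0
  cpuRequest: 200m
  memRequest: 128Mi
  cpuLimit: 200m
```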
> **Reviewer:** I am thinking if there is any better way to handle this or not.
>
> **Author (Contributor):** For now, I don't think there is. See the current `node-agent` DaemonSet: https://github.com/vmware-tanzu/helm-charts/blob/main/charts/velero/templates/node-agent-daemonset.yaml#L7. The `velero` deployment name is also hardcoded throughout the Velero codebase itself: maintenance jobs and pod volume restores depend on it. Of course, if you wish I can revert this change, but if we allow creating a deployment named anything other than `velero`, we essentially allow shipping a broken config. And in the current iteration it is not possible to have two working Velero instances in the same cluster (since either the deployment name is wrong when the fullname is not `velero`, or the ClusterRoles overlap).