[velero] Update velero to v1.17.0#709
Conversation
Just some info: I believe there were some additional breaking changes and deprecations introduced in this release. Breaking changes notes:
Deployment reference: https://github.com/vmware-tanzu/helm-charts/blob/main/charts/velero/templates/deployment.yaml#L188-L203
Release: https://github.com/vmware-tanzu/velero/releases/tag/v1.17.0
Fixed and added validations. I was looking at https://velero.io/docs/v1.17/upgrade-to-1.17/ instead... 😅
```yaml
#   cpu: 1000m
#   memory: 1024Mi
# Number of latest maintenance jobs to keep for each repository
latestJobsCount: 3
```
Since this was set as a default, shouldn't it be added as a default in the ConfigMap? As it stands, this default is simply lost now. It could be added here: https://github.com/vmware-tanzu/helm-charts/blob/main/charts/velero/values.yaml#L550
Also all these parameters could be kept in the values and simply be used to manage the global repositoryConfigData
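To illustrate the suggestion, here is a minimal sketch of what that default could look like in values.yaml; the `repositoryMaintenance` key and the `keepLatestMaintenanceJobs` field name are assumptions based on Velero's repository maintenance ConfigMap format, so verify them against the v1.17 docs before relying on them:

```yaml
# values.yaml (sketch, not the final chart API): rendered as-is
# into the repository maintenance job ConfigMap
repositoryMaintenance:
  global:
    # preserves the chart's previous default of retaining 3 jobs
    keepLatestMaintenanceJobs: 3
```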
I'll add the default in the configmap.
As for supporting the previous requests and limits fields, that would require parsing and merging logic, since the global config is currently rendered as-is. I'll wait for a maintainer to weigh in on this; to be honest, I'm not convinced it's worth it.
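For reference, if such merging were ever implemented, a rough template sketch could look like the following. The value paths, key names, and the ConfigMap data key are hypothetical; `mergeOverwrite` is one of Helm's built-in Sprig functions:

```yaml
{{- /* sketch: fold legacy resources values into the global
       repository maintenance config before rendering it */ -}}
{{- $legacy := dict "podResources" (dict
      "cpuLimit" .Values.maintenanceJob.resources.limits.cpu
      "memoryLimit" .Values.maintenanceJob.resources.limits.memory) }}
{{- /* user-supplied global config wins over the legacy values */ -}}
{{- $merged := mergeOverwrite $legacy (.Values.repositoryMaintenance.global | default dict) }}
data:
  repository-maintenance-config: {{ $merged | toJson | quote }}
```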
That's good enough for me, with that change we keep the current default behavior of the chart
Signed-off-by: Jakub Jaruszewski <jjaruszewski@man.poznan.pl>
# Conflicts:
#	charts/velero/Chart.yaml
Signed-off-by: Jakub Jaruszewski <jjaruszewski@man.poznan.pl>
Force-pushed from d59db2c to 4a179e6
Rebased on top of "If deployment is not velero, restores fail with:" — fixes #679
Pull Request Overview
This PR updates the Velero Helm chart from version 1.16.2 to 1.17.0, including image updates, CRD schema changes, and configuration modernization.
- Updated Velero image tag from v1.16.2 to v1.17.0
- Replaced deprecated repository maintenance job configuration with configmap-based approach
- Updated CRDs with new fields and improved status reporting
Reviewed Changes
Copilot reviewed 10 out of 10 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| charts/velero/Chart.yaml | Bumped chart version to 11.0.0 and appVersion to 1.17.0 |
| charts/velero/values.yaml | Updated image tags and migrated repository maintenance configuration |
| charts/velero/templates/deployment.yaml | Simplified deployment name and removed deprecated CLI arguments |
| charts/velero/templates/NOTES.txt | Added validation checks for removed configuration options |
| charts/velero/crds/*.yaml | Updated CRD schemas with new fields and improved status columns |
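The NOTES.txt validation checks mentioned above would typically follow Helm's usual `fail` pattern for removed options; a hedged sketch (the exact value path checked in this PR is an assumption):

```yaml
{{- /* sketch: reject the removed configuration key at render time */ -}}
{{- if .Values.maintenanceJob }}
{{- fail "maintenanceJob was removed in Velero 1.17; configure repository maintenance via the repository maintenance ConfigMap instead" }}
{{- end }}
```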
```diff
 kind: Deployment
 metadata:
-  name: {{ include "velero.fullname" . }}
+  name: velero
```
I am wondering whether there is a better way to handle this.
For now, I don't think there is. See the current node-agent ds: https://github.com/vmware-tanzu/helm-charts/blob/main/charts/velero/templates/node-agent-daemonset.yaml#L7
Also velero deployment is hardcoded through velero codebase itself:
- https://github.com/vmware-tanzu/velero/blob/2eb97fa8b187f9ed0aeb49f216565eddf93a0b08/pkg/repository/maintenance/maintenance.go#L414
- https://github.com/vmware-tanzu/velero/blob/1ec281a64eae4dbd37c7ddf5f6b0ba4a9f5678da/pkg/restore/actions/pod_volume_restore_action.go#L64
Maintenance jobs and pod volume restores are dependent on it.
Of course, if you wish I can revert this change, but if we allow creating a deployment with any name other than velero, we essentially allow shipping a broken config. And in the current iteration it is not possible to run two working velero instances in the same cluster anyway (either the deployment name is wrong when fullname is not velero, or the clusterroles overlap).
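If hardcoding is ever reconsidered, an alternative would be to keep the templated name but fail fast when it diverges; a sketch, not something this PR implements:

```yaml
{{- /* sketch: surface the upstream naming constraint at render time */ -}}
{{- if ne (include "velero.fullname" .) "velero" }}
{{- fail "the Velero deployment must be named 'velero'; maintenance jobs and pod volume restores depend on that name" }}
{{- end }}
```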
Special notes for your reviewer:
Updated image and CRDs to velero 1.17.0
Checklist
- [x] Title of the PR starts with the chart name (e.g. [velero])
fixes #679