
Velero pod gets evicted because the node is running low on ephemeral-storage  #7718

@jeremyvdveen

Description


What steps did you take and what happened:

I'm facing a problem where the Velero pod gets evicted because the node it was running on ran low on ephemeral-storage.
When I describe the pod, I see:

The node was low on resource: ephemeral-storage. Threshold quantity: 7775265486, available: 7549392Ki. Container velero was using 4579368Ki, request is 0, has larger consumption of ephemeral-storage.
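The eviction message shows the container's ephemeral-storage request is 0, so the scheduler never reserves any scratch space for the restic cache. One possible mitigation (a sketch only, not verified against chart version 6.0.0; it assumes the chart exposes the standard `resources` key for the server container, and the sizes here are placeholders to tune to your cache) would be to set explicit ephemeral-storage requests/limits in the chart values:

```yaml
# values.yaml fragment (hypothetical sizes; tune to your observed cache growth)
resources:
  requests:
    ephemeral-storage: 10Gi   # reserve scratch space so the scheduler accounts for the restic cache
  limits:
    ephemeral-storage: 30Gi   # let the kubelet evict this pod specifically, instead of pressuring the whole node
```

This doesn't stop the cache from growing, but it makes the consumption visible to the scheduler and bounds the blast radius of an eviction.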

What did you expect to happen:
I would expect that the pod keeps running.

Anything else you would like to add:

I'm using version 6.0.0 of the Velero Helm chart.
I've done some investigation into this issue myself and found that the restic cache stored in the scratch directory is causing the problem.
When I describe my BackupRepository, I see the following error message:

error running command=restic prune --repo=s3:s3-eu-west-1.amazonaws.com/company-x/restic/application-x --password-file=/tmp/credentials/velero/velero-repo-credentials-repository-password --cache-dir=/scratch/.cache/restic, stdout=, stderr=unable to create lock in backend: repository is already locked exclusively by PID 156300 on velero-58b69fbd4f-78jkr by cnb (UID 1002, GID 1000)
    lock was created at 2024-03-13 23:27:01 (29m59.784499528s ago)
    storage ID 4174ac16
    the `unlock` command can be used to remove stale locks
    : exit status 1
  phase: Ready
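The stderr above points at `restic unlock` for clearing the stale exclusive lock left behind by the evicted prune. A hedged sketch of doing that by exec'ing into the server pod (assumptions: the `velero` namespace and deployment name, and that the restic binary is on the PATH in the server image; the repo flags are copied from the error message above):

```shell
# Hypothetical: remove the stale lock so the next maintenance run can proceed.
kubectl -n velero exec deploy/velero -- \
  restic unlock \
    --repo=s3:s3-eu-west-1.amazonaws.com/company-x/restic/application-x \
    --password-file=/tmp/credentials/velero/velero-repo-credentials-repository-password \
    --cache-dir=/scratch/.cache/restic
```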

We are running Velero for a lot of our customers, but we only see this behavior in two of our customer environments. It's also worth mentioning that the backups still succeed, but this remains undesired behavior.

I've also tried running the restic prune command manually and noticed a very rapid increase in cache size, which resulted in the pod getting evicted again.
Before starting the prune command the cache size was 132K; the pod was evicted when the cache was between 20G and 25G.
I would have liked to use a PVC instead of an emptyDir for the scratch volume, as mentioned in #2087, but the Helm chart doesn't allow for that. I also tried pruning without the cache, but, as the command itself warns, that is very slow.
While going through the GitHub issues for this project, I found a few related ones:
#7177
#2087
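Between "full cache" and "no cache", restic also offers ways to bound how much a single prune touches. A hedged sketch of an interim workaround when running the prune manually (assumptions: the same repo flags as in the error above; `--max-repack-size` has existed since restic 0.12, and `restic cache --cleanup` removes old local cache directories — neither is something Velero's own maintenance loop can be configured to pass today, as far as I can tell):

```shell
# Hypothetical: cap the data repacked per run, which also bounds local cache growth.
restic prune \
  --repo=s3:s3-eu-west-1.amazonaws.com/company-x/restic/application-x \
  --password-file=/tmp/credentials/velero/velero-repo-credentials-repository-password \
  --cache-dir=/scratch/.cache/restic \
  --max-repack-size 2G

# Periodically drop stale per-repository cache directories:
restic cache --cleanup --cache-dir=/scratch/.cache/restic
```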

Environment:

  • Velero version: v1.13.0
  • Kubernetes version: v1.28.6
  • Cloud provider or hardware configuration: AWS - m6i.4xlarge
  • OS: Ubuntu 22.04.3

Vote on this issue!

This is an invitation to the Velero community to vote on issues; you can see the project's top-voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.

  • 👍 for "I would like to see this bug fixed as soon as possible"
  • 👎 for "There are more important bugs to focus on right now"
