Description
Describe the problem as clearly as you can
We're experiencing random network outages, DNS failures, etc. that are out of our control because we didn't set up the network, so bundle install frequently fails. This is reasonably workable when developing locally, because I can just run it again and it won't re-fetch the gems it already has. When building a Docker image, however, the install starts from scratch every time. Yesterday, for example, I ran the same bundle install command about 50 times in a row, and even the next day I haven't managed to run it successfully.
I added --retry=100 to the end of the bundle install command line, but what appears to happen when an outage hits is that the first attempt fails and then the 99 retries all run immediately afterwards and fail too. Setting the number sufficiently high might be a viable workaround; I just haven't found how high it has to be yet.
What Bundler should be doing here is some kind of exponential backoff, with successive retries spaced further and further apart.
Did you try upgrading rubygems & bundler?
No. However, looking at the current retry code shows that upgrading will not help:
https://github.com/rubygems/rubygems/blob/master/bundler/lib/bundler/retry.rb#L26-L54
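The exponential backoff being asked for could look roughly like the sketch below. This is not Bundler's actual code; the method name, constants, and jitter factor are all illustrative. The delay doubles on each failed attempt and is capped, so later retries have a real chance of landing after a transient outage ends:

```ruby
# Hedged sketch of a retry helper with exponential backoff and jitter.
# None of these names come from Bundler; they are placeholders.
def with_backoff(max_attempts: 5, base_delay: 1.0, max_delay: 60.0)
  attempt = 0
  begin
    yield
  rescue StandardError
    attempt += 1
    raise if attempt >= max_attempts
    # Delay doubles each attempt (1s, 2s, 4s, ...), capped at max_delay.
    delay = [base_delay * (2**(attempt - 1)), max_delay].min
    # Add a little random jitter so many clients don't retry in lockstep.
    sleep(delay + rand * delay * 0.1)
    retry
  end
end
```

With something like this, --retry=100 would space its attempts over minutes rather than firing them all within the same second.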
Post steps to reproduce the problem
- Be on an extremely flaky network
- Have a project with 100 or so gem dependencies
- Be on a machine where those gems aren't already installed (e.g. inside a fresh Docker image you're trying to build)
- Run bundle install and watch it fail
Which command did you run?
bundle install --retry=100
What were you expecting to happen?
The retries should be performed further and further apart in time, so that they have a chance of actually helping.
What actually happened?
The retries all run at once: you get 100 lines of output showing the retries in one go, followed by the failure.