Description
we staged M5, then before releasing, decided to rebuild and stage it again.
it isn't the first time that's ever happened, but it is the first time it's happened on a release I captained, iirc, and it's also the first time it's happened since we moved the release jobs off Jenkins and onto Travis-CI.
anyway: I didn't drop the old staging repos before creating the new ones. then when the scala-dist job ran (triggered by the scala/scala release job), it went to https://oss.sonatype.org/content/repositories/staging/ and despite newer staging repos existing, https://oss.sonatype.org/content/repositories/staging/ still had the old files
so the right files ended up on Maven Central (because we correctly dropped the first-round staging repos and only hit "release" on the second-round staging repos)... but scala-lang.org ended up with the old files
Activity
SethTisue commented on Aug 28, 2018
I wonder if dropping the old repos first is even enough to ensure that https://oss.sonatype.org/content/repositories/staging/ gets the right files. even if I'd done it in the better order, would that even have prevented this? I don't know for sure.
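One way to take the guesswork out of this would be to verify the staged bits before promoting anything. A minimal sketch (the helper name and paths are made up for illustration, not part of the actual scala-dist scripts): download the file the staging area is currently serving and compare its checksum against the locally built artifact.

```shell
# Illustrative helper: succeed only if two files have the same SHA-256,
# so a stale staging repo fails fast before anyone hits "release".
same_sha256() {
  local a b
  a=$(sha256sum "$1" | awk '{print $1}')
  b=$(sha256sum "$2" | awk '{print $1}')
  [ "$a" = "$b" ]
}

# usage (network; URL and local path illustrative):
#   wget -q -O staged.jar https://oss.sonatype.org/content/repositories/staging/.../scala-library.jar
#   same_sha256 staged.jar build/pack/lib/scala-library.jar || exit 1
```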
SethTisue commented on Aug 28, 2018
@adriaanm suggests that at minimum, we should modify the scala-dist scripts to print more diagnostic information:
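As one possible shape for that (a sketch of my own, not the suggestion itself; the function name is hypothetical), each download step could log the URL, byte size, and SHA-256 of what it actually fetched, so a stale staging repo is visible in the CI log immediately:

```shell
# Hypothetical helper for the scala-dist scripts: log what was actually
# downloaded, making size/checksum mismatches obvious in the build log.
log_artifact() {
  local url="$1" file="$2"
  echo "fetched: $url"
  echo "  size:   $(wc -c < "$file") bytes"
  echo "  sha256: $(sha256sum "$file" | awk '{print $1}')"
}
```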
ashawley commented on Aug 28, 2018
Coincidentally, producing shasums was raised by a user about the web site in scala/scala-lang#463
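Producing those shasums at release time could look something like this (a sketch; the helper name and bundle names are illustrative):

```shell
# Hypothetical helper: write a <bundle>.sha256 next to each dist bundle
# so the website can publish checksums alongside the downloads.
write_sha256() {
  sha256sum "$1" | awk '{print $1}' > "$1.sha256"
}

# at release time (bundle names illustrative):
#   for f in scala-2.13.0-M5.tgz scala-2.13.0-M5.zip; do write_sha256 "$f"; done
```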
SethTisue commented on Aug 28, 2018
we're also having a problem now with S3 where the dist bundles were overwritten but the new bundles aren't visible yet everywhere. so when I do
wget -nv -O scala-2.13.0-M5.tgz https://downloads.lightbend.com/scala/2.13.0-M5/scala-2.13.0-M5.tgz
in San Francisco it's 17721111 bytes, which is the new size, but when Lukas does it in Switzerland he gets 17712216 bytes, which is the old size. The Travis-CI job which copies the dists to chara (https://travis-ci.org/scala/scala-dist/builds/421624348) is still seeing the old files, so chara has the old files, and then https://scala-webapps.epfl.ch/jenkins/view/All/job/production_scala-lang.org-scala-dist-archive-sync/ also syncs the old files to https://www.scala-lang.org/files/archive/
not sure how long we have to wait before everyone sees the new files
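Until the caches converge, one low-tech check is to compare the Content-Length each mirror reports for the bundle (a sketch; the sizes are the new/old sizes quoted above):

```shell
# Parse the Content-Length out of HTTP response headers, as produced
# by `curl -sI <url>`, so different regions can be compared quickly.
content_length() {
  awk 'tolower($1) == "content-length:" { print $2 }' | tr -d '\r'
}

# usage (network):
#   curl -sI https://downloads.lightbend.com/scala/2.13.0-M5/scala-2.13.0-M5.tgz | content_length
# 17721111 means the new bundle; 17712216 means a stale edge cache
```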
SethTisue commented on Aug 28, 2018
a bit of discussion on Gitter:
martijnhoekstra commented on Aug 29, 2018
The release checklist has the hard point of no return, past which you can't "fix" the release number anymore and have to release under a newer version number. But if fixing things up is this much trouble, maybe that effort should be re-weighed against the benefit of skipping a release.
Skipping a milestone number doesn't strike me as a particularly big deal (some re-tagging of github tickets and issues targeting the next milestone), and if it makes things easier on everyone to call a skip and release under the next milestone instead, maybe doing that is a good idea.
That may be different for a release number, and maybe even for an RC.
In other words, consider separating the point where you have to skip/DOA a milestone or RC because there is no other way from the point where you decide skipping a milestone or RC is the right move because it's less of a pain.
The amount of pain shown here suggests that the point where you may be willing to skip a milestone could be as early as right after triggering the build with the version suffix on Travis.
adriaanm commented on Aug 29, 2018
True, it's just a number, but I think we should stop distinguishing hard and soft points of no return and just make it "the point of no return" (as it used to be -- redoing tags really isn't an option any more than redoing Maven artifacts is). We don't really need a distro for testing a staged release -- everyone will consume the artifacts from the Sonatype staging resolver. (If somehow the dist part fails, we will have to burn the release number.)
lrytz commented on Aug 29, 2018
Some more conclusions from internal chat:
archives
/update-api
sub-jobs). There might/will be cases where we have to do that (spurious failures, need to adjust some script that doesn't run on nightlies, ...). This probably means that we have to upload the artifacts somewhere for downstream jobs (we used s3 for that until now).

adriaanm commented on Aug 29, 2018
I've put the template here: https://github.com/scala/scala-dev/blob/scala-dev/notes/releases/template.md. Let's evolve it there, hopefully more script-like each time.