
Use hard memory limits for the daemon, not soft limits #1368

Description

@jyn514 (Member)

Right now, if we run out of memory, the server becomes more and more unresponsive until it stops being usable. If we're lucky, the OOM killer eventually notices and kills the process; if not, we have to hard-restart the instance. Instead of relying on the default soft behaviour of waiting for the OOM killer to notice, we should set a hard limit - that way the server restarts immediately instead of being down until someone has time to investigate.

systemd has a way to control this with MemoryMax=XXX (https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html). Unfortunately, because docker does its own shenanigans with cgroups, the limit doesn't propagate into docker containers. However, since we already limit the memory of docker containers to 3 GB by default, we can just set MemoryMax 3 GB lower than it would be otherwise. Note that for crates with a raised memory limit, we'll avoid an OOM through #1279, which just won't start the build.
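
For reference, the 3 GB per-container cap is a limit docker enforces through its own cgroups; an illustrative invocation is below (the image and command names are placeholders, and docs.rs actually applies the limit through its build sandbox rather than a raw `docker run`):

```sh
# Illustrative only: cap a container at 3 GiB of memory.
# "build-image" and "build-crate" are placeholder names, not the real docs.rs setup.
docker run --rm --memory=3g build-image build-crate
```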

We have about 15.5 GB of memory on the prod instance, so I'd suggest a 12 GB hard limit.
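
As a rough sketch, the hard limit could be a systemd drop-in along these lines (the unit name, drop-in path, and restart policy are assumptions, not the actual production config):

```ini
# /etc/systemd/system/docs.rs.service.d/memory.conf  (hypothetical unit name and path)
[Service]
# Hard cap: once the unit's cgroup exceeds this, the kernel OOM-kills it
# instead of letting the whole host become unresponsive.
MemoryMax=12G
# Assumed restart policy so the daemon comes back up on its own after an OOM kill.
Restart=always
```

Applying it would be the usual `systemctl daemon-reload` followed by a restart of the unit.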

Activity

jyn514 added the A-admin (Area: Administration of the production docs.rs server) label on Apr 18, 2021

jyn514 (Member, Author) commented on Apr 18, 2021

Thanks @pcwalton for suggesting this on Twitter, and @kprotty for suggesting it in the community Discord!

