This seems like a blunt instrument. A couple of questions:
The software in the list is the software pre-installed on the runners, at least as I understand it.
That software is already present when the runners bootstrap, because it's pre-installed in the images, so I don't think we cache it (nor would it make sense for GitHub to cache it).
What I think is happening is twofold:
I think this can go in for now, as it doesn't seem to be breaking anything, and it looks like an improvement from what I can see (I remember inspecting the coverage jobs last week and they were about 900 MiB, so assuming nothing else has changed, this already has some impact).
But unless we trim the number of jobs significantly (or narrow the scope of caching so it only runs on, say, pull requests and not on merges or releases), I'm afraid we will continue to be plagued by such issues.
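For illustration, restricting a cache step to pull requests could look roughly like this (a hypothetical sketch, not the actual step from our workflows; the key and path are placeholders):

```yaml
# Hypothetical cache step guarded so that only pull_request events
# restore/save the cache; merges and releases skip it entirely.
- name: Cache ccache directory
  if: github.event_name == 'pull_request'
  uses: actions/cache@v4
  with:
    path: .ccache
    key: ccache-${{ runner.os }}-${{ github.ref }}
```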
We only cache `.ccache` (https://github.com/diffblue/cbmc/blob/develop/.github/workflows/pull-request-checks.yaml#L932), so unless we expand the scope of that in a way that I don't understand, we just cache build artefacts.
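To make "build artefacts" concrete: ccache stores compiled objects in that directory, so what gets cached is reused compiler output rather than installed software. A sketch of how a build step typically wires this up (an assumed CMake invocation, not the actual workflow step):

```yaml
# Hypothetical build step: the compilers are wrapped by ccache, whose
# cache directory is the .ccache path saved/restored by actions/cache.
- name: Build with ccache
  env:
    CCACHE_DIR: ${{ github.workspace }}/.ccache
  run: |
    ccache --zero-stats
    cmake -S . -B build \
      -DCMAKE_C_COMPILER_LAUNCHER=ccache \
      -DCMAKE_CXX_COMPILER_LAUNCHER=ccache
    cmake --build build -j"$(nproc)"
    ccache --show-stats
```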
Hmm, I do wonder whether there is some confusion here: there is the cache that we control (I think we just put ccache results in there), and then there may be Docker image caches. The latter we don't really care about; that's just how GitHub may choose to build their images (I'm not even sure they use Docker images?). What we do get is VMs with a certain amount of disk space, some of which is consumed by pre-installed software. That software may come in via the latter kind of cache, but how it ends up in the image doesn't really matter to us - all that matters is which mount points the resulting disk image has and how much space is available on each.
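A diagnostic step like the following (illustrative only, not taken from our workflows) prints the mount points and their available space at the start of a job:

```yaml
# Hypothetical diagnostic step: list every mount point and the space
# still available on it inside the runner VM.
- name: Show available disk space
  run: df -h
```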
In practice, only 26 GB are available for our source code, build artefacts, and any software we may still need to install. With debug symbols enabled in the builds and the coverage logging that takes place, those 26 GB are no longer sufficient, and we had to free up additional space before the job could properly start its work.
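Freeing up space usually means deleting pre-installed software the job doesn't need before the build starts. A sketch of such a cleanup step (the paths are examples of large pre-installed packages commonly found on GitHub-hosted Ubuntu runners, not a verified list):

```yaml
# Hypothetical cleanup step: remove large pre-installed packages we
# don't use, then report how much space is left on the root mount.
- name: Free up disk space
  run: |
    sudo rm -rf /usr/share/dotnet /usr/local/lib/android /opt/ghc
    sudo apt-get clean
    df -h /
```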
Ahh, my apologies, I misunderstood.
It seems you're talking about the actual disk space inside the runner. In that case, yeah, I agree.
I originally thought this was going to affect cache utilisation (of which we're already using ~~16 GiB~~ 22 GiB of the allocated 10 GiB, and we're approaching the hard limit).
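For reference, the repository's Actions cache usage can be checked against that allowance via the REST API; a sketch using the gh CLI (repository name used as an example):

```yaml
# Hypothetical step: query the total size of active caches for the repo;
# GitHub starts evicting old entries once the allowance is exceeded.
- name: Report cache usage
  env:
    GH_TOKEN: ${{ github.token }}
  run: |
    gh api repos/diffblue/cbmc/actions/cache/usage \
      --jq '.active_caches_size_in_bytes'
```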