As a user, I expect that I don't have to care about caching to speed up
builds; that should be handled by the CI system itself.
Right now there exists a whitelist of docker images which are not removed
from the build slot after the build. Instead of that I would expect a
clean build environment, and that in general all images which I regularly
use are cached in the cluster, e.g. via a pull-through cache.
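A cluster-wide pull-through cache can be sketched with the stock `registry:2` image (the container name and port here are illustrative, not a description of our actual setup):

```shell
# Run a registry that proxies Docker Hub and caches every pulled layer.
# Setting REGISTRY_PROXY_REMOTEURL turns registry:2 into a pull-through cache.
docker run -d --name registry-mirror \
  -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2
```

Every node in the cluster would then pull through this one endpoint, so an image crosses the external network only once.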
1) Caching in a build slot is not very effective. CI runs execute a very
large number of almost identical steps within a small time window (e.g.
days). If caching happens per build slot and many slots are present, then
cache utilization will be very low.
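The utilization argument can be illustrated with a toy model (the job and slot counts are hypothetical): many near-identical jobs all need the same image; with per-slot caches every slot that receives a job pulls the image once from upstream, while a shared cluster cache pulls it only once in total.

```python
import random

def cold_pulls(jobs: int, slots: int, shared: bool) -> int:
    """Count how often the same image is fetched from the upstream
    registry, assuming jobs land on uniformly random slots."""
    if shared:
        return 1  # a single miss fills the cluster-wide cache
    warm = set()  # slots that already hold the image locally
    pulls = 0
    for _ in range(jobs):
        slot = random.randrange(slots)
        if slot not in warm:
            warm.add(slot)
            pulls += 1
    return pulls

random.seed(0)
# 200 near-identical CI runs spread over 50 build slots
per_slot = cold_pulls(200, 50, shared=False)
shared = cold_pulls(200, 50, shared=True)
print(per_slot, shared)  # per-slot caching re-pulls the image many times
```

With many slots, almost every slot ends up fetching the image from outside, which is exactly the low cache utilization described above.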
2) Whitelisting docker images specifically for the slot where the registry
runs is very error prone, and since the cache is not shared across the
cluster, the actual benefit for the user is opaque. Especially when
thinking about scaling a CI system, this leaks internal optimizations to
the user. Fast builds are twice as important for the CI system as they are
for the users (by default, faster builds and lower utilization are always
better than asking people to optimize on their side).
Just to understand: this is about adding a local dockerhub mirror on the OpenShift instance and configuring all builds to pull from it instead of dockerhub.io?
How much time do we estimate it will reduce from job runtime?
Thinking about our CI images, it can avoid peaks in network consumption. Right now, all of our lanes need to have run on each bare-metal machine before every node has a cache for each image, and the images are up to 8 GB. It is another measure to reduce the number of tests failing because of networking issues. Setting it up should be pretty simple: the mirror just needs to be injected into the docker configuration before docker starts.
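The injection on the client side could look roughly like this (the config path is the standard docker one; the mirror URL is an assumption, and on a systemd host the file must be written before `dockerd` starts or the daemon must be restarted):

```shell
# Point the docker daemon at the in-cluster mirror
# ("registry-mirrors" is the standard daemon.json key for this).
cat > /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["http://registry-mirror.example:5000"]
}
EOF
systemctl restart docker
```

After this, plain `docker pull` commands in the lanes transparently go through the mirror; no build scripts need to change.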