Docker Hub rate limits and GitLab container dependency proxy
Everyone has noticed it: CI is failing way too often. I haven't found the exact reason we are hitting rate limits, but it mainly shows that the current setup is too fragile.
We previously hit limits on Docker Hub because we were pulling directly from docker.io without authentication.
I worked on an MR to use GitLab's container dependency proxy, so the proxy would pull from docker.io, also without authentication.
The dependency proxy also seems to be hitting docker.io's rate limit.
From what I've read online, GitLab's dependency proxy does not seem to be the right answer to our problem:
- The proxy is per group, not per GitLab instance, which is inefficient.
- The proxy doesn't seem to support authenticating when pulling images from docker.io.
- More generally, I think the dependency proxy is not mature enough.
I'd like to revert the work I did and pick one of the following solutions:
- add docker.io authentication to the gitlab-runner/Docker client using a public read-only token, probably by creating a dedicated user (say `funkwhale-ci`) that is part of the funkwhale Docker Hub organization. We would create a token for each of the runners used by the project, which should let us pull 6000 times per day without hitting any limit.
We can then document the requirements and configuration needed for a runner to be usable globally by the Funkwhale devs.
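A minimal sketch of what the runner configuration could look like, assuming a dedicated `funkwhale-ci` user whose read-only token is available in `$DOCKER_HUB_TOKEN` (both names are placeholders, not an existing account). The Docker client expects the credentials base64-encoded as `user:token` inside its JSON auth config, which gitlab-runner can pick up through the `DOCKER_AUTH_CONFIG` environment variable:

```shell
# Encode "user:token" as base64, the format Docker's config.json expects
# (funkwhale-ci and DOCKER_HUB_TOKEN are hypothetical placeholders).
AUTH=$(printf 'funkwhale-ci:%s' "$DOCKER_HUB_TOKEN" | base64 -w0)

# Print the environment entry to add under [[runners]] in the runner's
# config.toml, so jobs authenticate to docker.io when pulling images:
cat <<EOF
environment = ["DOCKER_AUTH_CONFIG={\"auths\":{\"https://index.docker.io/v1/\":{\"auth\":\"$AUTH\"}}}"]
EOF
```

Each runner would get its own token, so a leaked one can be revoked without touching the others.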
- set up a mirror and configure the runners/Docker clients to pull from the mirror instead of from docker.io directly: https://docs.docker.com/registry/recipes/mirror/
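For the second option, a rough sketch under the assumption that we run the mirror ourselves on a host we control (`mirror.example.org` is a placeholder). The official `registry:2` image supports running as a pull-through cache via `REGISTRY_PROXY_REMOTEURL`, and each Docker daemon is then pointed at it through `registry-mirrors`:

```shell
# Start a registry acting as a pull-through cache of docker.io
# (hypothetical host; add TLS and auth before exposing it publicly):
docker run -d --restart=always -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  --name registry-mirror registry:2

# On every runner host, declare the mirror in /etc/docker/daemon.json:
#   { "registry-mirrors": ["https://mirror.example.org:5000"] }
# then restart the daemon so the setting takes effect:
sudo systemctl restart docker
```

This keeps a local cache of every image layer we pull, so repeated CI jobs would barely touch docker.io at all, at the cost of one more service to operate.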
I lean towards the first option.
cc @funkwhale/development @funkwhale/infrastructure