How to log in to a Docker account in GitLab CI - docker

I have subscribed to a Docker Pro plan to increase the pull rate limit for my self-hosted GitLab CI jobs. I then successfully logged in using this command on the server:
$ sudo docker login -u user -p *******
This is my .gitlab-ci.yml file:
image: edbizarro/gitlab-ci-pipeline-php:7.3-alpine

unittest:
  stage: testing
  services:
    - mysql:latest
  script:
    - ./vendor/bin/phpunit --colors --stop-on-failure
But when the job starts, I'm still getting this error:
Running with gitlab-runner 13.6.0 (8fa89735)
on fafa-group-runner n7oiBzAk
Preparing the "docker" executor
Using Docker executor with image edbizarro/gitlab-ci-pipeline-php:7.3-alpine ...
Starting service mysql:latest ...
Pulling docker image mysql:latest ...
ERROR: Preparation failed: Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit (docker.go:142:4s)
Am I missing something?

You performed the docker login as the root user on the host. However, the images are pulled by the GitLab runner, which runs as a different user and is possibly containerized.
The instructions for configuring runner registry credentials list several options, including setting DOCKER_AUTH_CONFIG either in the project's .gitlab-ci.yml or in the runner's config.toml. That variable holds the content of ~/.docker/config.json, including the registry credentials.
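For illustration, the value of DOCKER_AUTH_CONFIG is simply that JSON document; a minimal sketch, where the auth field is the base64 encoding of the placeholder user:password:
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "dXNlcjpwYXNzd29yZA=="
    }
  }
}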

Check also GitLab 13.9 (February 2021)
Automatically authenticate when using the Dependency Proxy
By proxying and caching container images from Docker Hub, the Dependency Proxy helps you to improve the performance of your pipelines.
Even though the proxy is intended to be heavily used with CI/CD, to use the feature you had to add your credentials to the DOCKER_AUTH_CONFIG CI/CD variable or manually run docker login in your pipeline. These solutions worked fine, but when you consider how many .gitlab-ci.yml files you need to update, it would be better if the GitLab Runner could automatically authenticate for you.
Since the Runner is already able to automatically authenticate with the integrated GitLab Container Registry, we were able to leverage that functionality to help you automatically authenticate with the Dependency Proxy.
Now it’s easier to use the Dependency Proxy to proxy and cache your container images from Docker Hub and start having faster, more reliable builds.
See Documentation and Issue.
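As a sketch of what referencing the proxy looks like in a pipeline (assuming the Dependency Proxy is enabled for your group; the image names follow the question above):
unittest:
  image: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/edbizarro/gitlab-ci-pipeline-php:7.3-alpine
  services:
    - ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/mysql:latest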

Related

Use cached docker image for gitlab-ci

I was wondering, is it possible to use cached docker images from the GitLab registry for gitlab-ci?
For example, I want to use the node:16.3.0-alpine docker image. Can I cache it in my GitLab registry and pull it from there to speed up my GitLab CI, instead of pulling it from Docker Hub?
Yes, GitLab's dependency proxy feature allows you to configure GitLab as a "pull through cache". This is also beneficial for working around rate limits of upstream sources like Docker Hub.
It should be faster in most cases to use the dependency proxy, but not necessarily so. It's possible that Docker Hub can be more performant than a small self-hosted server, for example. GitLab runners are also remote with respect to the registry and not necessarily any "closer" to the GitLab registry than to any other registry over the internet. So, keep that in mind.
As a side note, the absolute fastest way to retrieve cached images is to self-host your GitLab runners and hold the images directly on the host. That way, when a job starts and the image already exists on the host, the job starts immediately because it does not need to pull the image (depending on your runner's pull policy, and assuming you're using the image: keyword for your job).
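If you go the self-hosted route, the relevant knob is the runner's pull policy; a minimal config.toml excerpt, assuming the Docker executor (the runner name and default image are placeholders):
[[runners]]
  name = "self-hosted-runner"   # placeholder name
  executor = "docker"
  [runners.docker]
    image = "alpine:latest"
    # "if-not-present" reuses an image already on the host instead of pulling it for every job
    pull_policy = "if-not-present"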
I'm using a corporate Gitlab instance where for some reason the Dependency Proxy feature has been disabled. The other option you have is to create a new Docker image on your local machine, then push it into the Container Registry of your personal Gitlab project.
# First create a one-line Dockerfile containing "FROM node:16.3.0-alpine"
docker pull node:16.3.0-alpine
docker build . -t registry.example.com/group/project/image
docker login registry.example.com -u <username> -p <token>
docker push registry.example.com/group/project/image
where the image tag should be constructed based on the example given on your project's private Container Registry page.
Now in your CI job, you just change image: node:16.3.0-alpine to image: registry.example.com/group/project/image. You may have to run the docker login command (using a deploy token for credentials, see Settings -> Repository) in the before_script section -- newer versions of GitLab may have the runner authenticate to the private Container Registry using system credentials, but that can vary depending on how it's configured.
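A minimal sketch of the resulting job, assuming the image path from the commands above (the job name and script line are placeholders):
build-job:
  image: registry.example.com/group/project/image
  script:
    - node --version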

CD with GitLab, docker and docker private registry

We need to automate the deployment process. Let me point out the stack we use.
We have our own GitLab CE instance and a private docker registry. On the production server, the application runs in a container. After every master commit, GitLab CI builds the image with the code in it and sends it to the docker registry, and this is where the automation ends.
Deployment on the production server comes down to a few steps - stopping the current application container, pulling the newer one, and running it.
What is the best way to automate this process?
I read about a couple of solutions (but I believe there are many more):
the docker private registry notifies the production server, which performs all the above steps itself (a script on the production machine managed by e.g. supervisor or something similar)
using docker-machine to remotely manage running containers
What is the preferred way? Or can you recommend something else?
No need to use tools like swarm, kubernetes, etc. It's quite a simple application. Thanks in advance.
How about installing a GitLab CI runner on your production machine? Then add a job called deploy that runs after the push to the registry on master, and pin it to that machine using GitLab CI tags.
The job simply pulls the image from the registry and restarts your service or whatever you have in place.
Something like:
deploy-job:
  stage: deploy
  tags:
    - production
  script:
    - docker login myprivateregistry.com -u $SECRET_USER -p $SECRET_PASS
    - docker pull $CI_REGISTRY_IMAGE:latest
    - docker-compose down
    - docker-compose up -d
I can think of four solutions:
use watchtower on the production server (a run sketch follows this list): https://github.com/v2tec/watchtower
run a webhook server that your CI calls after pushing the image to the registry: https://github.com/adnanh/webhook
as already mentioned, run the CI on production too, which finally triggers your update commands
enable the docker API and update the container by requesting it from the CI
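For the watchtower option, a rough sketch of how it is typically started on the production host (the interval and the mounted Docker credentials are assumptions; the credentials are only needed for pulling from a private registry):
# check every 5 minutes and restart containers whose image has been updated in the registry
docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /root/.docker/config.json:/config.json \
  v2tec/watchtower --interval 300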

docker stack deploy results in "No such image" error

I am using docker swarm and would like to deploy a service with docker-compose. My service uses a custom image called myuser/myrepo:mytag that I successfully pushed to a private repository on Docker Hub.
My docker-compose looks like this:
version: "3.3"
services:
myservice:
image: myuser/myrepo:mytag
ports:
- "8080:8080"
Before executing, I successfully pulled the image with: docker pull myuser/myrepo:mytag
When I run docker stack deploy -c docker-compose.yml myapp I always receive the error: "No such image: myuser/myrepo:mytag".
Interestingly, running the same file using only: docker-compose up (i.e. without swarm mode) everything works fine and the service starts up.
I really don't understand why this is failing.
I've already tried cleaning up docker with docker system prune and then repull my image, no success.
Already found the solution.
My image is hosted on a private repository.
Besides the swarm manager (where I executed the commands), I had a running swarm worker.
When I ran docker stack deploy -c docker-compose.yml myapp, docker deployed the service to the worker node (not the manager node as I thought).
At the worker node, docker had no credentials to pull the image from the private repository.
Hence, to fix this, either pass the flag --with-registry-auth (which pushes the credentials for the repository to the worker node) or make sure that the service is deployed to a node where the image is present.
See: https://docs.docker.com/engine/reference/commandline/deploy/
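A sketch of the first fix, run on the swarm manager (the registry password prompt is omitted):
# log in on the manager, then forward the registry credentials to the worker nodes
docker login -u myuser
docker stack deploy --with-registry-auth -c docker-compose.yml myapp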
I want to add another scenario that leads to the same outcome (error message) so that people won't bang their heads against the wall.
Another possibility is that you are trying to deploy the image from an insecure registry but forgot to edit daemon.json on the server pulling the image.
If that is the case, let this answer act as a reminder and save you some time.
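A minimal /etc/docker/daemon.json sketch for the node that pulls the image (the registry host and port are placeholders); restart the Docker daemon after editing it:
{
  "insecure-registries": ["myregistry.example.com:5000"]
}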
I had a similar issue on Mac when behind the corporate firewall.
I was able to resolve it only after connecting directly to the internet.
Just to update: while I am on VPN, I am able to access the internet without any proxy settings, and am able to download (docker) images just fine with docker run. The issue is only with docker-compose.
I did try changing the nameserver to 8.8.8.8 in resolv.conf in my VMs, but the issue was not resolved.
In my case, I struggled with an image I had pushed to a new registry I had configured in my swarm. I was updating the stack using Portainer.
I configured all the necessary certificates and logins on all the nodes and verified I had uploaded the image using the following commands:
curl -X GET https://myregistry:5000/v2/_catalog
curl -X GET https://myregistry:5000/v2/{image}/tags/list
No matter what I tried I always had the "No such image" error displayed on the service instances.
In a last ditch attempt I created a service (without the compose file) using exactly the same URL for my image as I had previously and it worked, i.e. docker found the image and started the service! Further attempts using the compose file then worked properly for this and all other new images.
Weird.

Artifactory as docker Registry - docker-remote-cache stays empty

I finally managed to get Artifactory 5.1 running as a Docker registry with nginx in front as a reverse proxy, using the subdomain method with a wildcard SSL certificate.
I have the predefined set of docker repositories configured:
docker-local - repo
docker-remote - remote-repo
docker - virtual repo
I'm able to log in with the docker CLI and I can also push and pull images to and from docker, as mentioned in the JFrog docs.
I think my "docker-remote" doesn't work - it stays at 0 bytes with 0 artifacts in it.
If I pull something that isn't in my local repo, I would have expected it to be pulled from docker.io and cached in docker-remote, but it seems it's simply pulled from docker.io - that's it.
Do I have to configure something? Did I miss something, or do I have to configure replication?
Any suggestions?
To configure your Docker CLI to use Artifactory as its registry, follow the instructions here. Make sure to perform the steps listed under "Configuring Your Docker Client".
There are a couple of things you can do to check whether your docker CLI is using Artifactory as its registry:
Use the docker info command to see what registry is configured
Look at the Artifactory request and access logs and look for requests from the Docker CLI
Images fetched from docker.io should be present in the remote repository (see the pull sketch after this list)
Make sure the images you are pulling are not stored in the local Docker cache
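To populate the cache, it can help to pull an upstream image explicitly through the virtual repository rather than straight from docker.io; a sketch assuming the subdomain method maps docker.example.com to the docker virtual repo:
docker login docker.example.com
docker pull docker.example.com/hello-world:latest
# the pulled layers should now show up in the docker-remote cache in Artifactory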

Gitlab Continuous Integration on Docker

I have a GitLab server running in a Docker container: gitlab docker
On GitLab there is a project with a simple Makefile that runs pdflatex to build a pdf file.
In the Docker container I installed texlive and make; I also installed the docker runner with this command:
curl -sSL https://get.docker.com/ | sh
The .gitlab-ci.yml looks as follows:
.build:
  script: &build_script
    - make

build:
  stage: test
  tags:
    - Documentation Build
  script: *build_script
The job is stuck running and a message is shown:
This build is stuck, because the project doesn't have any runners online assigned to it
any idea?
The top comment on your link is spot on:
"Gitlab is good, but this container is absolutely bonkers."
Secondly, looking at GitLab's own advice, you should not be using this container on Windows, ever.
If you want to use GitLab CI from a GitLab server, you should actually be installing a proper GitLab server instance on a proper supported Linux VM, with Omnibus, and should not attempt to use this container for a purpose it is manifestly unfit for: a real production way to run GitLab.
Gitlab-omnibus contains:
a persistent (not stateless!) data tier powered by postgres.
a chat server whose entire point is to be a persistent log of your team chat.
not one, but a series of server processes that work together to give you gitlab server functionality and web admin/management frontend, in a design that does not seem ideal to me to be run in production inside docker.
an integrated CI build manager that is itself a Docker container manager. Your docker instance is going to contain a cache of other docker instances.
That this container was built by Gitlab itself is no indication you should actually use it for anything other than as a test/toy or for what Gitlab themselves actually use it for, which is probably to let people spin up Gitlab nightly builds, probably via kubernetes.
I think you're slightly confused here. Judging by this comment:
On the Docker container I installed texlive and make, I also installed
docker runner, command:
curl -sSL https://get.docker.com/ | sh
It seems you've installed docker inside docker and not actually installed any runners. This won't work if that's the case. The steps to get this running are:
Deploy a new GitLab runner. The quickest way to do this is to deploy another container with the gitlab-runner Docker image (a sketch of this step follows the list). You can't run a runner inside the container you've deployed GitLab in. You'll need to make sure you select an executor (I suggest using the shell executor to get you started) and then you need to register the runner. There is more information about how to do this here. What isn't detailed there is that if you're using docker for GitLab and docker for gitlab-runner, you'll need to link the containers or set up a docker network so they can communicate with each other.
Once you've deployed and registered the runner with GitLab, you will see it appear in http(s)://your-gitlab-server/admin/runners - from here you'll need to assign it to a project. You can also mark it as a "Shared" runner, which will execute jobs from all projects.
Finally, add the .gitlab-ci.yml as you already have, and the build will work as expected.
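A sketch of the first step above, assuming both containers run on the same host and share a user-defined network (the network name and config path are placeholders):
# network shared by the GitLab and runner containers so they can reach each other
docker network create gitlab-net
# start the runner container
docker run -d --name gitlab-runner --restart always \
  --network gitlab-net \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  gitlab/gitlab-runner:latest
# register it against the GitLab instance (interactive: asks for the URL, a registration token, and the executor)
docker run --rm -it --network gitlab-net \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  gitlab/gitlab-runner register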
Maybe you've set the wrong tags, like me. Make sure the tag name matches one of your available runners.
tags:
  - Documentation Build   # tags is used to select specific runners from the list of all runners that are allowed to run this project
see: https://docs.gitlab.com/ee/ci/yaml/#tags

Resources