Use cache docker image for gitlab-ci - docker

I was wondering, is it possible to use cached Docker images from the GitLab registry for GitLab CI?
For example, I want to use the node:16.3.0-alpine Docker image. Can I cache it in my GitLab registry and pull it from there to speed up my GitLab CI, instead of pulling it from Docker Hub?

Yes, GitLab's Dependency Proxy feature allows you to configure GitLab as a "pull-through cache". This is also useful for working around rate limits of upstream sources like Docker Hub.
It should be faster in most cases to use the dependency proxy, but not necessarily. It's possible that Docker Hub is more performant than a small self-hosted GitLab server, for example. GitLab runners are also remote with respect to the registry, and not necessarily any "closer" to the GitLab registry than to any other registry on the internet. So keep that in mind.
As a side note, the absolute fastest way to retrieve cached images is to self-host your GitLab runners and keep the images directly on the runner host. That way, when a job starts, if the image already exists on the host, the job starts immediately because it does not need to pull the image at all (depending on your pull policy, and assuming you reference the image in the image: declaration of your job).
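For reference, a job can pull through the Dependency Proxy by prefixing the image with GitLab's predefined CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX variable. A minimal sketch of a .gitlab-ci.yml using the node image from the question (the job name and script are just placeholders):
# .gitlab-ci.yml -- pull node through the group-level dependency proxy
test-job:
  image: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/node:16.3.0-alpine
  script:
    - node --version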

I'm using a corporate GitLab instance where, for some reason, the Dependency Proxy feature has been disabled. The other option you have is to create a new Docker image on your local machine, then push it to the Container Registry of your personal GitLab project.
# First create a one-line Dockerfile containing "FROM node:16.3.0-alpine"
docker pull node:16.3.0-alpine
docker build . -t registry.example.com/group/project/image
docker login registry.example.com -u <username> -p <token>
docker push registry.example.com/group/project/image
where the image tag should be constructed based on the example given on your project's private Container Registry page.
Now, in your CI job, you just change image: node:16.3.0-alpine to image: registry.example.com/group/project/image. You may have to run the docker login command (using a deploy token for credentials; see Settings -> Repository) in the before_script section. I think newer versions of GitLab let the runner authenticate to the private Container Registry using system credentials, but that can vary depending on how the runner is configured.
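As a rough sketch, the job might end up looking like the snippet below; how the runner authenticates the image pull depends on your setup (for example, a DOCKER_AUTH_CONFIG CI/CD variable holding the deploy-token credentials), so treat the names here as placeholders:
# .gitlab-ci.yml -- use the image pushed to the project's Container Registry
build:
  image: registry.example.com/group/project/image
  script:
    - node --version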

Related

Docker: get list of all the registries configured on a host

Can Docker be connected to more than one registry at a time, and how do I figure out which registries it is currently connected to?
$ docker help | fgrep registr
login Log in to a Docker registry
logout Log out from a Docker registry
pull Pull an image or a repository from a registry
push Push an image or a repository to a registry
As you can see, there is no option to list the registries. I did find a way by running:
$ docker system info | fgrep -i registr
Registry: https://index.docker.io/v1/
So... only one registry at a time? It is not like apt, where one can point to more than one source? Can anybody point me to some good documentation about Docker and registries?
Oddly, I searched the web to no avail.
Aside from docker login, Docker isn't "connected to a registry" per se. Registry names are part of the image name, and Docker will connect to a registry server if it needs to pull an image.
As a specific example, the official Docker image for Elasticsearch is on a non-default registry run by Elastic. The example in that documentation is
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.17.0
# ^^^^^^^^^^^^^^^^^
# registry host name
You don't need to otherwise configure your system to connect to that registry, download an index, or anything else. In fact, you don't even need this docker pull command; if you directly docker run the image, Docker will download it if it doesn't have a copy locally.
The default registry is Docker Hub, docker.io, and this cannot be changed.
There are several alternate registries out there. The various public-cloud providers each have their own, and there are also several free-standing image registries. Each has its own instructions on how to set it up. You always need to include the registry name as part of the image name. The Google Container Registry has a simple name syntax, for example, so if you use GCR then you can
# build an image locally, labeled to be stored in GCR
# (this step does not contact or use GCR at all)
docker build -t gcr.io/my-name/my-image:tag .
# authenticate to the registry
# (normally GCR has a Google-specific login sequence)
docker login https://gcr.io
# push the image
docker push gcr.io/my-name/my-image:tag
# run the image, pulling it if not present
docker run ... gcr.io/my-name/my-image:tag

How do I docker buildx build into a local "registry" container

I am trying to build a multi-arch image but would like to avoid pushing it to Docker Hub. I've had a lot of trouble finding out how to control the export options. Is there a way to make --push push to a registry of my choosing?
Any help is appreciated.
Docker provides a container image for a registry server that you can run yourself, even on localhost; see: Deploying a registry server.
There are other servers|services that implement the registry API (see below) but this is a good place to start.
Conventionally, images pushed|pulled default to Docker's registry; unless a registry is explicitly specified, an image e.g. your-image:your-tag defaults to docker.io/your-image:your-tag. In my opinion, it's good practice to always include this default, to be more transparent about it.
If you run Docker's registry image on localhost on the default port 5000, you'll need to tag your images as localhost:5000/your-image:your-tag to ensure that, when you docker push localhost:5000/your-image:your-tag, the CLI can determine that your local registry is the intended destination.
Similarly, if you use e.g. the Quay registry, images must be prefixed with quay.io; with Google Artifact Registry, images are prefixed with ${REGION}-docker.pkg.dev/${PROJECT}/${REPOSITORY}; etc.
IIRC it's not possible to push to Docker's registry (aka dockerhub) without an account so, as long as you ensure you're not logged in, you should not accidentally push images to Docker's registry.
NOTE You only need to use a registry to ease distribution of container images between machines. If you're only interested in local(host) development, you can docker run ... immediately after a successful docker build without any pushing|pulling (beyond interim images, e.g. FROM).
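Putting that together for the original buildx question, here is a rough sketch that runs Docker's registry image locally and pushes a multi-arch build to it; depending on your builder you may need the network=host driver option (or an insecure-registry setting) so the build container can reach localhost:
# run a throwaway registry on localhost:5000
docker run -d -p 5000:5000 --restart=always --name registry registry:2
# create a builder that can reach the host's localhost
docker buildx create --name local --driver docker-container --driver-opt network=host --use
# build for multiple platforms and push to the local registry instead of Docker Hub
docker buildx build --platform linux/amd64,linux/arm64 -t localhost:5000/your-image:your-tag --push .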

How to pull new docker images and restart docker containers after building docker images on gitlab?

There is an asp.net core api project, with sources in gitlab.
Created gitlab ci/cd pipeline to build docker image and put the image into gitlab docker registry
(thanks to https://medium.com/faun/building-a-docker-image-with-gitlab-ci-and-net-core-8f59681a86c4).
How to update docker containers on my production system after putting the image to gitlab docker registry?
*by update I mean:
docker-compose down && docker pull && docker-compose up
The best way to do this is to use an image puller; a lot of open-source options are available, or you can write your own in shell. There is one here. We use Swarm, and we use this hook concept, triggered from our CI/CD pipeline: once our build stage is done, we hit the hook URL over HTTP, and the host pulls the updated image. One disadvantage with this is that you need a daemon watching your hook task so that it doesn't crash or go down. So my suggestion is to run this hook task as a Docker container with its restart policy set to always.
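If you'd rather not run a hook daemon, a simpler alternative is a deploy job in the same pipeline that SSHes into the production host and re-pulls the stack. This is only a sketch; $DEPLOY_HOST, the path, and the SSH key setup are placeholders you'd have to fill in:
# .gitlab-ci.yml -- deploy stage run after the image is pushed
deploy:
  stage: deploy
  script:
    - ssh deploy@$DEPLOY_HOST "cd /srv/myapp && docker-compose pull && docker-compose up -d"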

How can I docker commit azure container instance to azure container registry

We have Ansible configured to deploy our various applications into an IIS environment. I am trying to create a Docker image of the deployed applications so that I can just start up containers as we need them for testing and otherwise.
I am planning to build on the Windows IIS image, start the container on Azure, run our Ansible to install everything on the server, then save the container as an image.
I cannot find any documentation on how I can docker commit the container image into our private Azure container registry.
Is it possible?
If you have an existing Docker registry in Azure, you should be able to use the az acr login --name myregistry command to authenticate to it: https://learn.microsoft.com/en-us/azure/container-registry/container-registry-get-started-docker-cli. Make sure a registry is created for the container image you want to push.
Next, you can run the container in Azure and do all the installation you want. SSH or RDP into the instance in Azure that is running this container. Now run docker ps and find the container id of the correct container. Next, use docker commit <container id> myregistry.azurecr.io/samples/nginx.
Then, just docker push myregistry.azurecr.io/samples/nginx
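Putting those steps together, the whole flow looks roughly like this (registry name and image path are the placeholders from above):
# authenticate to your Azure Container Registry
az acr login --name myregistry
# find the container you customized
docker ps
# snapshot it as an image and push it to ACR
docker commit <container id> myregistry.azurecr.io/samples/nginx
docker push myregistry.azurecr.io/samples/nginx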
Also, I'm not sure what your use case is, but starting a container in order to modify and commit it that way seems like an atypical use of Docker, since the build isn't reproducible via a Dockerfile. It looks like there are ways to replace Dockerfiles with Ansible playbooks using something like ansible-container (https://docs.ansible.com/ansible-container/), so you might want to take a look at that (I've never used this tool).

How to run container in a remote docker host with Jenkins

I have two servers:
Server A: Build server with Jenkins and Docker installed.
Server B: Production server with Docker installed.
I want to build a Docker image in Server A, and then run the corresponding container in Server B. The question is then:
What's the recommended way of running a container in Server B from Server A, once Jenkins is done with the docker build? Do I have to push the image to Docker hub to pull it in Server B, or can I somehow transfer the image directly?
I'm really not looking for specific Jenkins plugins or stuff, but rather, from a security and architecture standpoint, what's the best approach to accomplish this?
I've read a ton of posts and SO answers about this and have come to realize that there are plenty of ways to do it, but I'm still unsure what's the ultimate, most common way to do this. I've seen these alternatives:
Using docker-machine
Using Docker Restful Remote API
Using plain ssh root@server.b "docker run ..."
Using Docker Swarm (I'm super noob so I'm still unsure if this is even an option for my use case)
Edit:
I run Servers A and B in Digital Ocean.
A Docker image can be saved to a regular tar archive:
docker image save -o <FILE> <IMAGE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_save/
Then scp this tar archive to another host, and run docker load to load the image:
docker image load -i <FILE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_load/
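End to end, the save-copy-load flow looks roughly like this (host name and file names are placeholders):
# on Server A: export the freshly built image
docker image save -o my-image.tar my-image:tag
# copy the archive to the production server
scp my-image.tar user@server-b:/tmp/my-image.tar
# on Server B: load it into the local Docker daemon
ssh user@server-b docker image load -i /tmp/my-image.tar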
This save-scp-load method is rarely used, though. The common approach is to set up a private Docker registry behind your firewall and push images to / pull images from that private registry. This doc describes how to deploy a container registry, or you can choose a registry service provided by a third party, such as GitLab's container registry.
When using Docker registries, you only push/pull the layers that have changed.
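With a private registry, the flow is just an ordinary tag/push on Server A and pull/run on Server B; in this sketch, registry.internal:5000 stands in for whatever registry you deploy:
# on Server A (after the Jenkins build)
docker tag my-image:tag registry.internal:5000/my-image:tag
docker push registry.internal:5000/my-image:tag
# on Server B
docker pull registry.internal:5000/my-image:tag
docker run -d registry.internal:5000/my-image:tag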
You can also use the Docker REST API; the Jenkins HTTP Request plugin can be used to make the HTTP requests. Alternatively, you can run Docker commands directly against a remote Docker host by setting the DOCKER_HOST environment variable. To export the environment variable to the current shell:
export DOCKER_HOST="tcp://your-remote-server.org:2375"
Please be aware of the security concerns when allowing TCP traffic. More info.
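Once DOCKER_HOST points at Server B, ordinary docker commands from the Jenkins job run against the remote daemon; a short sketch (the image name is a placeholder):
export DOCKER_HOST="tcp://your-remote-server.org:2375"
docker pull registry.internal:5000/my-image:tag
docker run -d --name my-app registry.internal:5000/my-image:tag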
Another method is to use SSH Agent Plugin in Jenkins.
