Why would someone use a private Docker registry when they could just share their Dockerfiles in source control and have image consumers build directly from the Dockerfile with docker build?
To my untrained eye, the private Docker registry seems to serve the same purpose as source control, except it adds complexity because it's decoupled from the branch of code you're on, so you (or your CI/CD server, more to the point) have to reconstruct which tag to pull.
When you deploy on real machines (say you have a job that deploys on 10 machines), usually you don't really want to rebuild the image over and over again.
Instead you can build the image once, store it somewhere (???), then maybe deploy on a test environment, run some tests, make sure it's indeed a good image (it starts, tests pass, etc.) and then deploy on production.
So this "somewhere" means that you should have some registry, and if you don't want to use Docker Hub you can use a private Docker registry.
This makes sense from both a security standpoint (don't publish to the cloud if you don't need to) and a performance standpoint (moving images between servers on a private network is faster).
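As a minimal sketch of that flow (the registry hostname and image name here are made up):

docker build -t my.private.registry:5000/myapp:1.0.42 .   # build once on the CI server
docker push my.private.registry:5000/myapp:1.0.42         # store it in the private registry
# ...run tests against that exact image, then on each target machine:
docker pull my.private.registry:5000/myapp:1.0.42
docker run -d my.private.registry:5000/myapp:1.0.42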
If you're running Kubernetes you can also configure it to pull the images from the registry and run pods based on them, as specified in your Kubernetes deployment files.
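For example, a minimal Deployment manifest pulling from a private registry could look like this (registry host, image name and tag are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: my.private.registry:5000/myapp:1.0.42   # pulled from the private registry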
If you're running on AWS you can use a Docker registry with services like Fargate or ECS. They will take care of scaling out across machines (pretty much like k8s does), but they still need to take the images from somewhere - in this case from a private registry in the cloud called ECR (Elastic Container Registry).
Bottom line: in many (simple) cases you can live without it, but in some other cases it comes in pretty handy.
I'm trying to understand how Docker works in production for a scalable Symfony application.
Let's say we start with a basic LAMP stack:
Apache container
PHP container
MySQL container
According to my research once our containers are created, we push them to the registry (docker registry).
For production, the orchestrator will take care of creating the PODS (in the case of kubernetes) and will call the images that we have uploaded to the registry.
Did I understand that right? Do we push 3 separate images to the registry?
Let me clarify a couple of things:
push them to the registry (docker registry)
You can push them to the Docker registry; however, depending on your company policy this may not be allowed, since Docker Hub (Docker's registry) is hosted somewhere on the internet and not at your company.
For that reason many enterprises deploy their own registry, something like JFrog's Artifactory for example.
For production, the orchestrator will take care of creating the PODS (in the case of kubernetes)
Well, you will have to tell the orchestrator what it needs to create; in the case of Kubernetes you will need to write a YAML file that describes what you want, then it will take care of the creation.
And having an orchestrator is not only suitable for production. Ideally you would want to have a staging environment as similar to your production environment as possible.
will call the images that we have uploaded to the registry
That may need to be configured. As mentioned above, Docker Hub is not the only registry out there, but in the end you need to make sure that it is somehow possible to connect to the registry in order to pull the images you pushed.
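On Kubernetes, for example, this usually means creating a pull secret and referencing it from the pod spec. A minimal sketch, assuming a private registry at my.private.registry (hostname, credentials and image name are placeholders):

kubectl create secret docker-registry regcred \
  --docker-server=my.private.registry \
  --docker-username=ci-user \
  --docker-password=changeme

# then in the pod/deployment spec:
spec:
  imagePullSecrets:
  - name: regcred
  containers:
  - name: app
    image: my.private.registry/myapp:1.0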
Do we push 3 separate images to the registry?
You could do that; however, I would not advise you to do so.
As good as containers may have become, they also have downsides.
One of their biggest downsides is still stateful applications (read up on this article to understand the difference between stateful and stateless).
Although it is possible to have stateful applications (such as MySQL) inside a container and orchestrate it by using something like Kubernetes, it is not advised to do so.
Containers should be ephemeral, something that does not work well with databases for example.
For that reason I would not recommend having your database within a container but instead use a virtual or even a physical machine for it.
Regarding PHP and Apache:
You may have 2 separate images for these two, but it honestly is not worth the effort since there are images that already have both of them combined.
The official PHP image has versions that include Apache; better to use that and save yourself some maintenance effort.
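As a rough sketch (the PHP version, extension and source paths are just examples), a Dockerfile for the combined PHP + Apache image could be as simple as:

FROM php:8.2-apache
# install any PHP extensions your Symfony app needs, e.g. the MySQL PDO driver
RUN docker-php-ext-install pdo_mysql
# copy the application into Apache's document root
COPY . /var/www/html/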
Lastly I want to say that you cannot simply take everything from a virtual/physical server, put it into a container and expect it to work just as it used to.
My overall goal is to install a self-hosted gitlab-runner that is restricted to using prepared docker images from my own docker registry only.
For that I have a systemd configuration that looks like:
/etc/systemd/system/docker.service.d/allow-private-registry-only.conf
BLOCK_REGISTRY='--block-registry=all'
ADD_REGISTRY='--add-registry=my.private.registry:8080'
By this, docker pull is allowed to pull images from my.private.registry/ only.
After I had managed to get this working, I wanted to clean up my local registry and remove old docker images. It was during that process that I stumbled over a docker image named gitlab/gitlab-runner-helper, which presumably is some component used by the gitlab-runner itself and has been pulled from docker.io.
Now I'm wondering if it is even possible/advisable to block images from docker.io when using a gitlab-runner?
Any hints are appreciated!
I feel it my sovereign duty to extend the accepted answer (it is great, btw), because the word 'handle' doesn't really tell us much - it is too abstract. Let me explain the whole flow in far more detail:
When the build is about to begin, gitlab-runner creates a docker volume (you can observe it with docker volume ls if you want). This volume will serve as storage for the caches and artifacts that you are using during the build.
The second thing - you will have at least 2 containers involved in each stage: the gitlab-runner-helper container and the container created from the image you specified (in .gitlab-ci.yml or in config.toml). What the gitlab-runner-helper container does is, essentially, just clone the remote git repository (that you are building) into the aforementioned docker volume, along with caches and artifacts.
It can do this because the gitlab-runner-helper image itself contains 2 important utilities: git (obviously - to clone the repo) and the gitlab-runner-helper binary (this utility can pull and push artifacts and caches).
The gitlab-runner-helper container starts before each stage for a couple of seconds to pull artifacts and caches, and then terminates. After that, the container created from the image that you specified will be launched, and it will also have this volume (from step 1) attached - this is how it receives artifacts, btw.
The rest of the details about the registry the gitlab-runner-helper image gets pulled from are described by @Nicolas pretty well. I append this comment just for someone who, perhaps, wants to know what exactly this sneaky 'handle' word means.
Hope it helps, have a nice day, my friend!
The gitlab-runner-helper image is used by GitLab Runner to handle Git, artifacts, and cache operations for the docker, docker+machine or kubernetes executors.
As you prefer pulling images from a private registry, you can override the helper image. Your configuration could be:
[[runners]]
(...)
executor = "docker"
[runners.docker]
(...)
helper_image = "my.private.registry:8080/gitlab/gitlab-runner-helper:tag"
Please ensure the image is present on your registry, or that your configuration enables proxying Docker Hub or registry.gitlab.com. For the latter, you need to run at least GitLab Runner version 13.7 and have the FF_GITLAB_REGISTRY_HELPER_IMAGE feature flag enabled.
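If the image is not in your registry yet, mirroring it is a matter of pulling, re-tagging and pushing (the tag below is only an example - use the one matching your runner version):

docker pull gitlab/gitlab-runner-helper:x86_64-v13.7.0
docker tag gitlab/gitlab-runner-helper:x86_64-v13.7.0 my.private.registry:8080/gitlab/gitlab-runner-helper:x86_64-v13.7.0
docker push my.private.registry:8080/gitlab/gitlab-runner-helper:x86_64-v13.7.0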
I want to build some docker images in a certain step of my Google Cloud Build, then push them in another step. I'm thinking the CI used doesn't really matter here.
This is because some of the push commands are dependent on some other conditions and I don't want to re-build the images.
I can docker save to some tar in the mounted workspace, then docker load it later. However, that's fairly slow. Is there any better strategy? I thought of trying to copy to/from /var/lib/docker, but that seems ill-advised.
The key here is doing the docker push from the same host on which you have done the docker build.
The docker build, however, doesn’t need to take place on the CICD build machine itself, because you can point its local docker client to a remote docker host.
To point your docker client to a remote docker host you need to set three environment variables.
On a Linux environment:
DOCKER_HOST=tcp://<IP Address Of Remote Server>:2376
DOCKER_CERT_PATH=/some/path/to/docker/client/certs
DOCKER_TLS_VERIFY=1
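For example, a CI step could then build and push against that remote host like this (the IP address, cert path and image name are placeholders):

export DOCKER_HOST=tcp://10.0.0.5:2376
export DOCKER_CERT_PATH=/secrets/docker-client-certs
export DOCKER_TLS_VERIFY=1
docker build -t my.private.registry/myapp:1.0 .   # the build runs on the remote docker host
docker push my.private.registry/myapp:1.0         # the push happens from that same host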
This is a very powerful concept that has many uses. One can for example, point to a dev|tst|prod docker swarm manager node. Or, point from Linux to a remote Windows machine and initiate the build of a Windows container. This latter use case might be useful if you have common CICD tooling that implements some proprietary image labeling that you want to re-use also for Windows containers.
The authentication here is mutual SSL/TLS, so both client and server private/public keys need to be generated with a common CA. This might be a little tricky at first, so you may want to see how it works using docker-machine and its environment-setting shortcuts:
https://docs.docker.com/machine/reference/env/
Once you’ve mastered this concept you’ll then need to script the setting of these environment variables in your CICD scripts making client certs available in a secure way.
Given a Windows application running in a Docker Windows Container, where the running application makes changes to the Windows registry, is there a docker switch/command that allows those registry changes to be persisted, so that when the container is restarted the changed values are retained?
As a comparison, file changes can be persisted between container restarts by exposing mount points e.g.
docker volume create externalstore
docker run -v externalstore:\data microsoft/windowsservercore
What is the equivalent feature for Windows Registry?
I think you're after dynamic changes (each start and stop of the container contains different user keys you want to save for the next run), like a roaming profile, rather than a static set of registry settings, but I'm writing for static as it's an easier and more likely answer.
It's worth noting the distinction between a container and an image.
Images are static templates.
Containers are started from images and while they can be stopped and restarted, you usually throw them entirely away after each execution with most enterprise designs such as with Kubernetes.
If you wish to run a docker container like a VM (not generally recommended), stopping and starting it, your registry settings should persist between runs.
It's possible to convert a container to an image by using the docker commit command. In this method, you would start the container, make the needed changes, then commit the container to an image. New containers would be started from the new image. While this is possible, it's not really recommended for the same reason that cloning a machine or upgrading an OS is not. You will get extra artifacts (files, settings, logs) that you don't really want in the image. If this is done repeatedly, it'll end up like a bad photocopy.
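A minimal sketch of that approach (container and image names are made up):

docker run -it --name myapp-tmp mybaseimage
# ...make the needed registry changes inside the running container, then exit...
docker commit myapp-tmp myapp:with-settings   # snapshot the container's filesystem as a new image
docker rm myapp-tmp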
A better way to make a static change is to build a new image using a dockerfile. You'll need to read up on that (beyond the scope of this answer) but essentially you're writing a docker script that will make a change to an existing docker image and save it to a new image (done with docker build). The advantage of this is that it's cleaner, more repeatable, and each step of the build process is layered. Layers are advantageous for space savings. An image made with a windowsservercore base and application layer, then copied to another machine which already had a copy of the windowsservercore base, would only take up the additional space of the application layer.
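A rough sketch of such a Dockerfile for a static registry change (the base image tag, key and value are placeholders):

FROM mcr.microsoft.com/windows/servercore:ltsc2022
# bake a static registry setting into an image layer
RUN reg add "HKLM\SOFTWARE\MyApp" /v SomeSetting /t REG_SZ /d SomeValue /f

You would build it with docker build -t myapp-with-settings . and start new containers from that image.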
If you want to repeatedly create containers and apply consistent settings to them but without building a new image, you could do a couple things:
Mount a volume with a script and set the execution point of the container/image to run that script. The script could import the registry settings and then kick off whatever application you were originally using as the execution point; note that the script would need to be a continuous loop. The MS SQL Developer image is a good example: https://github.com/Microsoft/mssql-docker/tree/master/windows/mssql-server-windows-developer. The script could export the settings you want. I'm not sure if there's an easy way to detect "shutdown" and have it run at that point, but you could easily set it to run in a loop writing continuously to the mounted volume (see the sketch after this list).
Leverage a control system such as Docker Compose or Kubernetes to handle the setting for you (not sure offhand how practical this is for registry settings)
Have the application set the registry settings
Open ports to the container which allow remote management of the container (not recommended for security reasons)
Mount a volume where the registry files are located in the container (I'm not certain where these are or if this will work correctly)
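For the first option, a hypothetical entrypoint script (the key path, file names and interval are all made up) could look roughly like this in PowerShell:

# entrypoint.ps1 - restore previously saved settings, start the app, keep exporting
if (Test-Path C:\data\settings.reg) {
    reg import C:\data\settings.reg
}
Start-Process "C:\app\MyApp.exe"          # the application originally used as the execution point
while ($true) {
    reg export "HKLM\SOFTWARE\MyApp" C:\data\settings.reg /y
    Start-Sleep -Seconds 60
}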
TL;DR: You should make a new image using a dockerfile for static changes. For dynamic changes, you will probably need to use some clever scripting.
With a new release of our product, we want to move to new technologies (Kubernetes) so that we can take advantage of its services. We have a local Kubernetes application running in our infra. We have dockerized our applications and now we want to use the images to integrate with Kubernetes and build the cluster - pods.
But we are stuck at the docker registry, as our customer does not want any public/private docker repository (registry) where we can upload these images. We have tried docker save and docker load, but no luck (error: portal-66d9f557bb-dc6kq 0/1 ImagePullBackOff). Is it at all possible to have some filesystem from which we can access these images? Any other alternative is welcome if it solves our problem (no private/public repository/registry).
A Docker registry of some sort is all but a requirement to run Kubernetes. Paid Docker Hub supports private images; Google and Amazon both have hosted registry products (GCR and ECR respectively); there are third-party registries; or you can deploy the official registry image locally.
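If deploying the official registry image locally is acceptable to the customer, a minimal (single-node, insecure) sketch looks like this; the application image name is a placeholder:

docker run -d -p 5000:5000 --name registry registry:2
docker tag myapp:1.0 localhost:5000/myapp:1.0
docker push localhost:5000/myapp:1.0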
There's an alternative path where you docker save every private image you reference in any Kubernetes pod spec, then docker load it on every single node. This has obvious scale and maintenance problems (whenever anybody updates any image you need to redeploy it by hand to every node). But if you really need to try this, make sure your pod specs specify imagePullPolicy: Never to avoid trying to fetch things from registries. (If something isn't present on a node the pod will just fail to start up.)
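For reference, that setting goes on each container in the pod spec (names here are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: portal
spec:
  containers:
  - name: portal
    image: portal:local          # must already exist on the node, e.g. via docker load
    imagePullPolicy: Never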
The registry is "just" an open-source (Go) HTTP REST service that implements the Docker registry API, if that helps your deployment story.