I am looking for an open source solution to sync several docker registries. Could anybody give me some hints about this?
The easiest way to set up a Docker registry is to use the official Docker Registry image. It lets you run a registry server with a configurable storage backend. As others have mentioned, you can use S3 or Google Cloud Storage (I have personally used Google Cloud Storage and have not run into any problems).
I would also check out this DigitalOcean post about setting up a Docker registry: How to set up a Docker registry.
Since you are interested in clustering, all you need to do at this point is set up multiple registry servers with the same bucket as the storage backend, then put a load balancer such as HAProxy or nginx in front of them. This gives you the fault tolerance and load balancing you are looking for.
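As a minimal sketch of one such registry instance, assuming a GCS bucket named my-registry-bucket and a service-account key file (both hypothetical), run the same command on every registry host so they all share the one bucket:

docker run -d -p 5000:5000 --name registry \
  -e REGISTRY_STORAGE=gcs \
  -e REGISTRY_STORAGE_GCS_BUCKET=my-registry-bucket \
  -e REGISTRY_STORAGE_GCS_KEYFILE=/etc/keyfile.json \
  -v /path/to/keyfile.json:/etc/keyfile.json:ro \
  registry:2

Because all state lives in the bucket, the load balancer can send a pull to any of the instances.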
I'm wondering if the following concept is possible:
I've got a Docker registry with images, and I've got a few servers that I want to be able to pull images, but not directly from the registry. I would like another server to act as a Docker proxy: this server would be the only one with access to the registry, and the other servers would use it to download images.
Is this possible?
Sounds like you need a reverse proxy. The Docker docs have articles on the subject, for nginx and/or Apache.
https://docs.docker.com/registry/recipes/apache/
https://docs.docker.com/registry/recipes/nginx/
This can be used for authentication, rate limiting, proxy buffering and many other features included in nginx/Apache.
This setup is not uncommon, and not exclusive to hosting a Docker Registry.
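For a rough idea of what the nginx recipe boils down to (the hostname, upstream address and cert paths here are hypothetical; the linked recipe is the authoritative version):

events {}
http {
  server {
    listen 443 ssl;
    server_name registry.example.com;            # hypothetical hostname
    ssl_certificate     /etc/nginx/certs/domain.crt;
    ssl_certificate_key /etc/nginx/certs/domain.key;
    client_max_body_size 0;                      # image layers can be large
    location /v2/ {
      proxy_pass http://registry-backend:5000;   # the only host that reaches the registry
      proxy_set_header Host              $http_host;
      proxy_set_header X-Real-IP         $remote_addr;
      proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_read_timeout 900;
    }
  }
}

The other servers then pull via registry.example.com and never talk to the registry host directly.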
I am trying to find an effective way to use the docker remote API in a secure way.
I have a Docker daemon running on a remote host and a Docker client on a different machine. I need the solution to not depend on the client or server OS, so that it is relevant to any machine with a Docker client/daemon.
So far, the only way I found to do such a thing is to create certs on a Linux machine with openssl and copy the certs to the client/server manually, as in this example:
https://docs.docker.com/engine/security/https/
and then configure docker on both sides to use the certificates for encryption and authentication.
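i.e., roughly this on each side (hostnames and file paths are placeholders):

# On the daemon host, start dockerd with TLS verification enabled:
dockerd --tlsverify \
  --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem \
  -H=0.0.0.0:2376

# On each client, after manually copying ca.pem, cert.pem and key.pem over:
docker --tlsverify \
  --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
  -H=myhost.example.com:2376 version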
This method is rather clunky in my opinion, because it is sometimes a problem to copy files around and put them on each machine I want to use the remote API from.
I am looking for something more elegant.
Another solution I've found is using a proxy for basic HTTP authentication, but with this method the traffic is not encrypted, so it is not really secure.
Does anyone have a suggestion for a different solution or for a way to improve one of the above?
Your favorite system automation tool (Chef, SaltStack, Ansible) can probably directly manage the running Docker containers on a remote host, without opening another root-equivalent network path. There are Docker-oriented clustering tools (Docker Swarm, Nomad, Kubernetes, AWS ECS) that can run a container locally or remotely, but you have less control over where exactly (you frequently don't actually care) and they tend to take over the machines they're running on.
If I really had to manage systems this way I'd probably use some sort of centralized storage to keep the TLS client keys, most likely Vault, which has the property of storing the keys encrypted, requiring some level of authentication to retrieve them, and being able to access-control them. You could write a shell function like this (untested):
dockerHost() {
  mkdir -p "$HOME/.docker/$1"
  # Fetch the CA certificate, client certificate, and client key for this host.
  JSON=$(vault kv get -format=json "secret/docker/$1")
  for f in ca.pem cert.pem key.pem; do
    # jq -r writes the raw PEM contents rather than a JSON-quoted string.
    echo "$JSON" | jq -r ".data.data[\"$f\"]" > "$HOME/.docker/$1/$f"
  done
  # tcp:// plus DOCKER_TLS_VERIFY is the documented form for a TLS-secured daemon.
  export DOCKER_HOST="tcp://$1:2376"
  export DOCKER_TLS_VERIFY=1
  export DOCKER_CERT_PATH="$HOME/.docker/$1"
}
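Then, in the same shell, and assuming you have already authenticated to Vault (the hostname is hypothetical):

dockerHost myhost.example.com
docker ps   # now talks to the remote daemon over verified TLS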
While your question makes clear you understand this, it bears repeating: do not enable unauthenticated remote access to the Docker daemon, since it is trivial to take over a host with unrestricted root access if you can access the socket at all.
Based on your comments, I would suggest you go with Ansible if you don't need the Swarm functionality and only need single-host support. Ansible only requires SSH access, which you probably already have available.
It's very easy to use an existing service that's defined in Docker Compose, or you can just invoke your shell scripts from Ansible. There is no need to expose the Docker daemon to the outside world.
A very simple example file (playbook.yml):
- hosts: all
  tasks:
    - name: setup container
      docker_container:
        name: helloworld
        image: hello-world
Running the playbook:
ansible-playbook -i username@mysshhost.com, playbook.yml
Ansible provides pretty much all of the functionality you need to interact with Docker via its module system:
docker_service
Use your existing Docker compose files to orchestrate containers on a single Docker daemon or on Swarm. Supports compose versions 1 and 2.
docker_container
Manages the container lifecycle by providing the ability to create, update, stop, start and destroy a container.
docker_image
Provides full control over images, including: build, pull, push, tag and remove.
docker_image_facts
Inspects one or more images in the Docker host's image cache, providing the information as facts for making decisions or assertions in a playbook.
docker_login
Authenticates with Docker Hub or any Docker registry and updates the Docker Engine config file, which in turn provides password-free pushing and pulling of images to and from the registry.
docker (dynamic inventory)
Dynamically builds an inventory of all the available containers from a set of one or more Docker hosts.
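As a rough sketch of how a couple of these modules combine in a playbook (the registry name and credentials here are made up):

- hosts: all
  tasks:
    - name: log in to a private registry
      docker_login:
        registry: registry.example.com
        username: deployer
        password: "{{ registry_password }}"

    - name: pull an image from that registry
      docker_image:
        name: registry.example.com/myapp:1.0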
I'm just curious whether there's a chance to move the GitLab registry out to a different host? I can see that it is possible to move the storage to another place, but what about the service itself?
I just want to run the registry somewhere else but still make use of GitLab's authentication and UI features to manage it. Maybe this is not possible, but please could someone shed some light on it.
Yup, you can disable the bundled registry and run a registry yourself, while still using GitLab's authentication and UI.
GitLab's built-in registry is basically just a deployment of the Docker Registry. You can run it, or another compatible registry, and then configure GitLab to use it as described here: Disable Container Registry but use GitLab as an auth endpoint.
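On an Omnibus install, the relevant settings in /etc/gitlab/gitlab.rb look roughly like this (the external hostname is hypothetical, and the linked page also covers the auth token and certificate wiring you need):

registry['enable'] = false                 # disable the bundled registry
gitlab_rails['registry_enabled'] = true    # keep GitLab's auth/UI integration
gitlab_rails['registry_host'] = "registry.example.com"
gitlab_rails['registry_api_url'] = "https://registry.example.com:5000"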
Is it possible, to pull private images from Docker Hub to a Google Cloud Kubernetes cluster?
Is this recommended, or do I need to push my private images also to Google Cloud?
I read the documentation, but I found nothing that explains this clearly. It seems to be possible, but I don't know if it's recommended.
There is no restriction on using any registry you want. If you just use the image name (e.g., image: nginx) in the pod specification, the image will be pulled from the public Docker Hub registry, with the tag assumed to be :latest.
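For example, a minimal pod spec that pulls from the public hub:

apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx   # no registry prefix and no tag: Docker Hub, :latest assumed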
As mentioned in the Kubernetes documentation:
The image property of a container supports the same syntax as the docker command does, including private registries and tags. Private registries may require keys to read images from them.
Using Google Container Registry
Kubernetes has native support for the Google Container Registry (GCR) when running on Google Compute Engine (GCE). If you are running your cluster on GCE or Google Kubernetes Engine, simply use the full image name (e.g. gcr.io/my_project/image:tag). All pods in a cluster will have read access to images in this registry.
Using AWS EC2 Container Registry
Kubernetes has native support for the AWS EC2 Container Registry when nodes are AWS EC2 instances. Simply use the full image name (e.g. ACCOUNT.dkr.ecr.REGION.amazonaws.com/imagename:tag) in the Pod definition. All users of the cluster who can create pods will be able to run pods that use any of the images in the ECR registry.
Using Azure Container Registry (ACR)
When using Azure Container Registry you can authenticate using either an admin user or a service principal. In either case, authentication is done via standard Docker authentication. These instructions assume the azure-cli command line tool.
You first need to create a registry and generate credentials; complete documentation for this can be found in the Azure container registry documentation.
Configuring Nodes to Authenticate to a Private Repository
Here are the recommended steps to configure your nodes to use a private registry. In this example, run these on your desktop/laptop:
Run docker login [server] for each set of credentials you want to use. This updates $HOME/.docker/config.json.
View $HOME/.docker/config.json in an editor to ensure it contains just the credentials you want to use.
Get a list of your nodes, for example:
if you want the names: nodes=$(kubectl get nodes -o jsonpath='{range .items[*].metadata}{.name} {end}')
if you want the IPs: nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')
Copy your local .docker/config.json to the home directory of root on each node.
For example: for n in $nodes; do scp ~/.docker/config.json root@$n:/root/.docker/config.json; done
Use cases:
There are a number of solutions for configuring private registries.
Here are some common use cases and suggested solutions.
Cluster running only non-proprietary (e.g. open-source) images. No need to hide images.
Use public images on the Docker hub.
No configuration required.
On GCE/Google Kubernetes Engine, a local mirror is automatically used for improved speed and availability.
Cluster running some proprietary images which should be hidden from those outside the company, but visible to all cluster users.
Use a hosted private Docker registry.
It may be hosted on the Docker Hub, or elsewhere.
Manually configure .docker/config.json on each node as described above.
Or, run an internal private registry behind your firewall with open read access.
No Kubernetes configuration is required.
Or, when on GCE/Google Kubernetes Engine, use the project’s Google Container Registry.
It will work better with cluster autoscaling than manual node configuration.
Or, on a cluster where changing the node configuration is inconvenient, use imagePullSecrets (see the sketch after this list).
Cluster with proprietary images, a few of which require stricter access control.
Ensure the AlwaysPullImages admission controller is active. Otherwise, all Pods potentially have access to all images.
Move sensitive data into a “Secret” resource, instead of packaging it in an image.
A multi-tenant cluster where each tenant needs its own private registry.
Ensure the AlwaysPullImages admission controller is active. Otherwise, all Pods of all tenants potentially have access to all images.
Run a private registry with authorization required.
Generate a registry credential for each tenant, put it into a secret, and populate the secret into each tenant's namespace.
The tenant adds that secret to imagePullSecrets of each namespace.
Consider reading the Pull an Image from a Private Registry document if you decide to use a private registry.
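For the imagePullSecrets option mentioned in the list above, the flow is roughly this (all names and credentials are hypothetical):

kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=deployer \
  --docker-password=s3cret \
  --docker-email=deployer@example.com

and then reference the secret from each pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: private-app
spec:
  containers:
    - name: app
      image: registry.example.com/private-app:1.0
  imagePullSecrets:
    - name: regcred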
There are 3 types of registries:
Public (Docker Hub, Docker Cloud, Quay, etc.)
Private: a registry running on your local network. An example would be running a Docker container with the registry image.
Restricted: a registry that requires credentials to authenticate. Google Container Registry (GCR) is an example.
As you say, in a public registry such as Docker Hub you can have private images.
Private and restricted registries are obviously more secure, as one of them is (ideally) not even exposed to the internet, and the other one needs credentials. I guess you can achieve an acceptable security level with any of them, so it is a matter of choice. If your application is critical and you don't want to run any risk, you should have it in GCR or in a private registry.
If it is important but not critical, you could have it in any public repository as a private image. This adds a layer of security.
I'm searching for a replication solution for a private Docker registry from Azure to AWS (the managed AWS container registry, ECR, or my own privately hosted Docker registry if needed, though hopefully not). Is there a built-in Docker registry option to do that? Or another known solution? Or perhaps another registry provider hosted in AWS East? My Google-fu failed me this time.
Please note: this is a question about replication, not about one-time migration. Our main registry would still be Azure Container Registry.
There are no native replication options across ACR and ECR. You might be able to integrate with ACR webhooks and replicate push/delete events to ECR using the digest and tag. This would likely require a custom job (Jenkins/VSTS/a VM, etc.).
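A sketch of what such a custom job could do for a single image on each webhook event (the registry names, region and account ID are made up, and this assumes a current AWS CLI):

#!/bin/sh
# Mirror one image/tag from ACR to ECR, e.g. triggered by an ACR push webhook.
SRC=myregistry.azurecr.io/myapp:latest
DST=123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest

docker pull "$SRC"
docker tag "$SRC" "$DST"
aws ecr get-login-password --region us-east-1 |
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker push "$DST"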