I'm wondering if the following concept is possible:
I've got a Docker registry with images, and I've got a few servers that I want to be able to pull images, but not directly from the registry. I would like to have another server act as a Docker proxy: this server would be the only one with access to the registry, and the other servers would download images through it.
Is this possible?
Sounds like you need a reverse proxy. The Docker docs have articles on the subject for both Apache and nginx:
https://docs.docker.com/registry/recipes/apache/
https://docs.docker.com/registry/recipes/nginx/
Such a proxy can also handle authentication, rate limiting, proxy buffering, and many other features built into nginx/Apache.
This setup is not uncommon, and not exclusively for hosting a Docker Registry.
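As a rough sketch of the idea (loosely trimmed from the nginx recipe linked above; host names and certificate paths are placeholders, and the real recipe adds authentication and more headers), the proxy host would carry an nginx config along these lines:

cat > /etc/nginx/conf.d/registry-proxy.conf <<'EOF'
server {
    listen 443 ssl;
    server_name registry-proxy.internal;                 # placeholder name for the proxy host

    ssl_certificate     /etc/nginx/certs/registry.crt;   # placeholder cert paths
    ssl_certificate_key /etc/nginx/certs/registry.key;

    # Docker image layers can be very large
    client_max_body_size 0;
    chunked_transfer_encoding on;

    location /v2/ {
        # Only this host talks to the real registry
        proxy_pass       https://my-private-registry.example.com/v2/;
        proxy_set_header Host              $http_host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 900;
    }
}
EOF

The other servers would then docker pull registry-proxy.internal/myimage:tag, and only the proxy host needs network access to the registry itself.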
Related
At my job, we have internal services that can only be reached over an HTTP proxy. One such service is our internal Docker registry. I'm unable to communicate with this registry because my Docker daemon isn't configured to use an HTTP proxy.
If I do configure my Docker daemon to use the company HTTP proxy, I can push/pull images from the internal registry, but I'm now unable to communicate with any other registries. Changing the HTTP proxy environment variables and restarting my entire Docker daemon several times per day is a massive hassle and waste of time.
Basically, what I need to do is configure Docker to use an HTTP proxy to communicate with one registry, but not with all the other ones.
Is it possible to configure Docker this way, or is it all or nothing?
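For reference, the daemon-wide configuration described above is typically a systemd drop-in such as the following (the proxy address is a placeholder); it applies to every registry the daemon talks to, which is exactly the all-or-nothing behaviour in question:

mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF
systemctl daemon-reload
systemctl restart docker      # the restart that becomes a hassle several times a day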
We're using Docker for our project. We have a monitoring service (for our native application) which runs in Docker.
Currently there is no user management for this monitoring service. Is there any way we can add user management from Dockerfile?
Please note that I'm not looking for Docker container user management.
In simple words, the functionality I'm looking for is:
Add a user and password in the Dockerfile.
When accessing the external IP, the same user and password must be provided to view the running monitoring service.
From the little information about your setup, I would say the authentication should be handled by your monitoring service itself. If it is some kind of web app, you could use simple HTTP basic auth as a first step.
Docker doesn't know what's running in your container, and its networking setup is limited to a simple pass-through between containers or from host ports to containers. It's typical to run programs with many different network protocols in Docker (Web servers, MySQL, PostgreSQL, and Redis all have different wire protocols) and there's no way Docker on its own could inject an authorization step.
Many non-trivial Docker setups include some sort of HTTP reverse proxy container (frequently nginx, occasionally Apache) that can both serve a JavaScript UI's built static files and route HTTP requests to a backend service. You can add an authentication check there; the mechanics of doing this are specific to your choice of proxy server.
Consider that any information you include in the Dockerfile or in docker build --build-arg options can easily be retrieved by anyone who has your image (by looking at its docker history, or by starting a debug shell in it with docker run). If the credentials are sensitive and your image isn't stored somewhere with strong protections (e.g. anyone can docker pull it from Docker Hub), you may need to inject them at run time using a bind mount instead.
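As a minimal sketch of the reverse-proxy approach with nginx and basic auth (the backend name and port, the user name, and the Docker network are placeholders; the monitoring container is assumed to be reachable as "monitoring" on a shared network):

# Create the credentials file on the host, outside the image, per the note above
# (htpasswd comes from apache2-utils / httpd-tools)
mkdir -p ./auth
htpasswd -c ./auth/htpasswd monitor-admin

cat > ./auth/monitoring.conf <<'EOF'
server {
    listen 80;
    location / {
        auth_basic           "Monitoring";
        auth_basic_user_file /etc/nginx/conf.d/htpasswd;
        proxy_pass           http://monitoring:9090;    # placeholder backend name/port
        proxy_set_header     Host $http_host;
    }
}
EOF

# Bind-mount the config and credentials at run time; nothing secret is baked into an image
docker run -d --name monitoring-proxy --network monitoring-net -p 80:80 \
  -v "$PWD/auth/monitoring.conf:/etc/nginx/conf.d/default.conf:ro" \
  -v "$PWD/auth/htpasswd:/etc/nginx/conf.d/htpasswd:ro" \
  nginx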
I am trying to find an effective way to use the docker remote API in a secure way.
I have a docker daemon running in a remote host, and a docker client on a different machine. I need my solution to not be client/server OS dependent, so that it would be relevant to any machine with a docker client/daemon etc.
So far, the only way I found to do such a thing is to create certs on a Linux machine with openssl and copy the certs to the client/server manually, as in this example:
https://docs.docker.com/engine/security/https/
and then configure docker on both sides to use the certificates for encryption and authentication.
This method is rather clunky in my opinion, because sometimes it's a problem to copy files onto each machine I want to use the remote API from.
I am looking for something more elegant.
Another solution I've found is using a proxy for HTTP basic authentication, but with this method the traffic is not encrypted, so it isn't really secure.
Does anyone have a suggestion for a different solution or for a way to improve one of the above?
Your favorite system automation tool (Chef, SaltStack, Ansible) can probably directly manage the running Docker containers on a remote host, without opening another root-equivalent network path. There are Docker-oriented clustering tools (Docker Swarm, Nomad, Kubernetes, AWS ECS) that can run a container locally or remotely, but you have less control over where exactly (you frequently don't actually care) and they tend to take over the machines they're running on.
If I really had to manage systems this way I'd probably use some sort of centralized storage to keep the TLS client keys, most likely Vault, which has the property of storing the keys encrypted, requiring some level of authentication to retrieve them, and being able to access-control them. You could write a shell function like this (untested):
dockerHost() {
  # Fetch the TLS client certs for host "$1" from Vault and point the Docker CLI at them
  mkdir -p "$HOME/.docker/$1"
  JSON=$(vault kv get -format=json "secret/docker/$1")
  for f in ca.pem cert.pem key.pem; do
    # -r emits the raw PEM contents rather than a JSON-quoted string
    echo "$JSON" | jq -r ".data.data[\"$f\"]" > "$HOME/.docker/$1/$f"
  done
  export DOCKER_HOST="tcp://$1:2376"
  export DOCKER_CERT_PATH="$HOME/.docker/$1"
  export DOCKER_TLS_VERIFY=1
}
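Usage would then look something like this (the host name is hypothetical, and this assumes the Vault secret and the remote daemon's TLS setup already exist):

dockerHost builds.example.com
docker ps        # now talks to the remote daemon over mutually authenticated TLS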
While your question makes clear you understand this, it bears repeating: do not enable unauthenticated remote access to the Docker daemon, since it is trivial to take over a host with unrestricted root access if you can access the socket at all.
Based on your comments, I would suggest you go with Ansible if you don't need the Swarm functionality and only need single-host support. Ansible only requires SSH access, which you probably already have available.
It's very easy to reuse an existing service defined in Docker Compose, or you can just invoke your shell scripts from Ansible. There is no need to expose the Docker daemon to the outside world.
A very simple example file (playbook.yml)
- hosts: all
  tasks:
    - name: setup container
      docker_container:
        name: helloworld
        image: hello-world
Running the playbook
ansible-playbook -i username@mysshhost.com, playbook.yml
Ansible provides pretty much all of the functionality you need to interact with Docker via its module system:
docker_service
Use your existing Docker compose files to orchestrate containers on a single Docker daemon or on Swarm. Supports compose versions 1 and 2.
docker_container
Manages the container lifecycle by providing the ability to create, update, stop, start and destroy a container.
docker_image
Provides full control over images, including: build, pull, push, tag and remove.
docker_image_facts
Inspects one or more images in the Docker host's image cache, providing the information as facts for making decisions or assertions in a playbook.
docker_login
Authenticates with Docker Hub or any Docker registry and updates the Docker Engine config file, which in turn provides password-free pushing and pulling of images to and from the registry.
docker (dynamic inventory)
Dynamically builds an inventory of all the available containers from a set of one or more Docker hosts.
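For a quick one-off, the modules described above can also be driven ad hoc from the shell rather than from a playbook; a sketch with a placeholder host, pulling an image on the remote Docker host over SSH:

ansible all -i mysshhost.com, -u username -m docker_image -a "name=hello-world"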
I tried to set up artifactory as docker registry as shown in this video: http://www.jfrog.com/video/artifactory-docker-integration/
However, I don't have SSL installed in artifactory so I'm using the --insecure-registry flag. (as shown in error in docker build publish plugin and Remote access to a private docker-registry)
Anyway, I can't figure out the Artifactory-as-Docker-registry URL, so that I can do this:
curl -k -uusername:password "http://sdpvvrwm812.ib.tor.company.com:8081/artifactory/api/docker/docker-images"
This page, http://www.jfrog.com/confluence/display/RTF/Docker+Repositories, shows at the bottom that something called a reverse proxy might be needed. Is this true, and if so, how do I set one up?
The reason a reverse proxy is required in front of Artifactory is a Docker client limitation: you cannot use a context path when providing the registry path, e.g. sdpvvrwm812.ib.tor.company.com:8081/artifactory/api/docker/docker-images is not valid. The Docker client assumes you are working with one big registry for all images, while Artifactory allows you to manage multiple registries (repositories) on the same server.
To overcome this issue you should set up a reverse proxy which allows the Docker client to send requests to the root context and forwards those requests to the correct repository path in Artifactory, for example forwarding requests from sdpvvrwm812.ib.tor.company.com:8888/ to sdpvvrwm812.ib.tor.company.com:8081/artifactory/api/docker/docker-images.
The Artifactory documentation contains configuration examples for NginX, Apache and HAProxy.
Notice that there are different configurations for Docker registry API v1 and v2.
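As a rough illustration of the v2 case only (ports and repository key taken from the example above; the complete, version-specific configuration, including TLS, is in the Artifactory docs), the proxy essentially maps the root-level /v2/ API onto the repository's context path:

cat > /etc/nginx/conf.d/artifactory-docker.conf <<'EOF'
server {
    listen 8888;
    server_name sdpvvrwm812.ib.tor.company.com;

    client_max_body_size 0;
    chunked_transfer_encoding on;

    location /v2/ {
        proxy_pass       http://sdpvvrwm812.ib.tor.company.com:8081/artifactory/api/docker/docker-images/v2/;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF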
After setting up the reverse proxy, the Docker client should use the proxy in order to access Artifactory.
If you are using the --insecure-registry flag there is no need to configure an SSL certificate. With older versions of Docker, before this flag was introduced (in Docker 1.3.2), an SSL certificate was mandatory.
I am looking for an open source solution to sync several docker registries. Could anybody give me some hints about this?
The easiest way to set up a Docker registry is to use the official registry image. This lets you run a registry server with a configurable storage backend. As others have mentioned, you can use S3 or Google Cloud Storage. (I have personally used Google Cloud Storage and have not run into any problems.)
I would also check out this DigitalOcean post about setting up a Docker registry: How to set up a Docker registry.
Since you are interested in clustering, all you would need to do at this point is set up multiple registry servers with the same bucket as a storage backend, then put a load balancer such as HAProxy or nginx in front of them. This gives you the fault tolerance and load balancing you are looking for.
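A minimal sketch of that layout, using placeholder bucket, region, and credentials (the official registry image reads its configuration from REGISTRY_* environment variables):

# Two registry instances sharing one S3 bucket as their storage backend
docker run -d --name registry-a -p 5001:5000 \
  -e REGISTRY_STORAGE=s3 \
  -e REGISTRY_STORAGE_S3_REGION=us-east-1 \
  -e REGISTRY_STORAGE_S3_BUCKET=my-shared-registry-bucket \
  -e REGISTRY_STORAGE_S3_ACCESSKEY=AKIAEXAMPLE \
  -e REGISTRY_STORAGE_S3_SECRETKEY=examplesecret \
  registry:2

docker run -d --name registry-b -p 5002:5000 \
  -e REGISTRY_STORAGE=s3 \
  -e REGISTRY_STORAGE_S3_REGION=us-east-1 \
  -e REGISTRY_STORAGE_S3_BUCKET=my-shared-registry-bucket \
  -e REGISTRY_STORAGE_S3_ACCESSKEY=AKIAEXAMPLE \
  -e REGISTRY_STORAGE_S3_SECRETKEY=examplesecret \
  registry:2

# nginx or HAProxy then balances client traffic across 127.0.0.1:5001 and 127.0.0.1:5002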