Inter-process communication between a Docker container and its host

I am setting up a continuous integration system with GitLab and Docker. For some reason I have to commit the current container as a new Docker image in one stage of the CI system, so I can reuse the image in subsequent stages.
To summarize, I have to execute this command:
docker commit $CONTAINER_ID $NEW_IMAGE_NAME
But from inside a container. And later from another container:
docker rmi $NEW_IMAGE_NAME
One solution might be setting up ssh public key authentication and:
ssh user@172.17.0.1 docker ...
Here, 172.17.0.1 is the host IP address. I can restrict the SSH user to specific commands for security.
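For illustration, such a restriction could be enforced with an OpenSSH forced command on the host; the wrapper script name and paths below are hypothetical, only the authorized_keys mechanism itself is standard:
# ~/.ssh/authorized_keys of the restricted user on the host:
command="/usr/local/bin/ci-docker-wrapper",no-port-forwarding,no-pty ssh-ed25519 AAAA... ci@runner
with /usr/local/bin/ci-docker-wrapper containing something like:
#!/bin/sh
# allow only "commit <container> <image>" and "rmi <image>"
set -- $SSH_ORIGINAL_COMMAND
case "$1" in
  commit) exec docker commit "$2" "$3" ;;
  rmi)    exec docker rmi "$2" ;;
  *)      echo "command not allowed" >&2; exit 1 ;;
esac
The container would then run something like ssh ciuser@172.17.0.1 commit "$CONTAINER_ID" "$NEW_IMAGE_NAME".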
Another solution is to create a public service on a network socket on the host. But what is the best approach here? I prefer a secure solution, so that from inside the container you can only commit a Docker image and delete the created image (and not other images). So an unrestricted SSH is not secure enough. And I prefer a more portable solution that doesn't rely on the host IP address. What do you suggest?

Issue/Question
How can I execute Docker API calls from within a container?
Did you know
Did you know that the Docker API can be served on a network socket by adding the option -H tcp://0.0.0.0:2375? You can then make calls from within containers directly to the Docker daemon.
Note that you can (and should) also enable TLS for this socket; see the docker daemon man page (dockerd).
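As a rough sketch (certificate paths are placeholders; 2376 is the conventional port for the TLS-protected API), the daemon and a container-side call could look like this:
# on the host: serve the API over TCP with TLS client authentication
dockerd --tlsverify \
  --tlscacert=/etc/docker/ca.pem \
  --tlscert=/etc/docker/server-cert.pem \
  --tlskey=/etc/docker/server-key.pem \
  -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376

# from inside a container, with the client certificates mounted at /certs:
docker --tlsverify \
  --tlscacert=/certs/ca.pem --tlscert=/certs/cert.pem --tlskey=/certs/key.pem \
  -H tcp://172.17.0.1:2376 commit $CONTAINER_ID $NEW_IMAGE_NAME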
Security is a must
If this option does not seem clean or secure enough, then a local networking* service will be needed. I would suggest a small web API in Java or Python that responds to two different calls, which could be:
commit: http[s]://localhost:service-port/commit?container_id=123456789&image_name=my_name
rmi: http[s]://localhost:service-port/rmi?container_id=123456789
The local service would then answer HTTP 201 Created if the image is created, or HTTP 406 Not Acceptable if the name already exists. It could also check that no more than one rmi in a row is performed. It could answer HTTP 204 No Content if no image with this ID exists, HTTP 403 Forbidden if the image cannot be deleted, or HTTP 200 OK if everything went well. As a last resort it could answer HTTP 418 I'm a teapot.
*: Local networking is fast, mostly secure, easy to deploy, and works natively with Docker. A FIFO (see man mkfifo) could also be used, but it would require another shared volume (for the FIFO file) and, maybe, more code.
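From inside the container, the two calls would then be plain HTTP requests; a minimal sketch (8080 stands in for service-port, and localhost assumes the container can actually reach the host service, e.g. via host networking, otherwise substitute the host address):
# commit the current container under a new image name
curl -s -o /dev/null -w '%{http_code}\n' \
  "http://localhost:8080/commit?container_id=$CONTAINER_ID&image_name=$NEW_IMAGE_NAME"
# later, from another container: remove the created image
curl -s -o /dev/null -w '%{http_code}\n' \
  "http://localhost:8080/rmi?container_id=$CONTAINER_ID"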

Related

Can (Should) I Run a Docker Container with Same host name as the Docker Host?

I have a server application (that I cannot change) that, when you connect as a client, will give you other URLs to interact with. Those URLs are also part of the same server so the URL advertised uses the hostname of a docker container.
We are running in a mixed economy (some Docker containers, some regular applications). We actually need a setup where the server runs as a Docker application on a single VM, and that server will be accessed by non-Docker clients (as well as Docker clients not running on the same Docker network).
So you have a server hostname (the docker container) and a docker hostname (the hostname of the VM running docker).
The client's initial connection is to dockerhostname:1234, but when the server sends URLs to the client, it sends serverhostname:5678 ... which is not resolvable by the client. So far, we've addressed this by adding the server hostname to the client's /etc/hosts file, but this is a pain to maintain.
I have also set the --hostname of the server docker container to the same name as the docker host and it has mostly worked but I've seen where a docker container running on the same docker network as the server had issues connecting to the server.
I realize this is not an ideal Docker setup. We're migrating from a history of delivering RPMs to delivering containers ... but it's a slow process. Our company has lots of applications.
I'm really curious if anyone has advice/lessons learned with this situation. What is the best solution to my URL problem? (I'm guessing it is the /etc/hosts we're already doing)
You can do port mapping with -p 8080:80.
How do you build and run your container? With a shell command, a Dockerfile or a YAML file?
Check the published ports with:
docker port <container>
Then connect to [SERVERIP]:[PORT FROM DOCKERHOST] and it will work.
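For example (the nginx image and the container name are just placeholders):
docker run -d --name web -p 8080:80 nginx
docker port web
# 80/tcp -> 0.0.0.0:8080
# so clients outside the Docker host connect to http://<SERVERIP>:8080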
To work with hostnames you need DNS, or you use a hosts file.
The hosts file solution is not a good idea; that's how the internet was run back in the early days ^^
If something changes, you have to update the hosts file on every client!
Or use a static IP for your container:
docker network ls
docker network create my-network
docker network create --subnet=172.18.0.0/16 mynet123
docker run --net mynet123 --ip 172.18.0.22 -it ubuntu bash
Assign static IP to Docker container
You're describing a situation that requires a ton of work. The shortest path to success is your "adding things to /etc/hosts file" process. You can use configuration management, like ansible/chef/puppet to only have to update one location and distribute it out.
But at that point, you should look into something called "service discovery." There are a ton of ways to skin this cat, but the short of it is this. You need some place (lazy mode is DNS) that stores a database of your different machines/services. When a machine needs to connect to another machine for a service, it asks that database. Hence the "service discovery" part.
Now, implementing the database is the hardest part of this; there are a bunch of different approaches, and you'll need to spend some time with your team to figure out which is the best way.
Normally running an internal DNS server like dnsmasq or bind should get you most of the way, but if you need something like consul that's a whole other conversation. There are a lot of options, and the best thing to do is research, and audit what you actually need for your situation.
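As a sketch of the "lazy mode is DNS" route, a single dnsmasq entry on an internal DNS server is enough to point the advertised server hostname at the Docker host (the hostname and address here are hypothetical):
# /etc/dnsmasq.conf on the internal DNS server
address=/serverhostname/10.0.0.5
# then point the clients' resolver at this server instead of editing /etc/hosts everywhere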

Adding user management for our container via dockerfile

We're using Docker for our project. We have a monitoring service (for our native application) which is running on Docker.
Currently there is no user management for this monitoring service. Is there any way we can add user management from Dockerfile?
Please note that I'm not looking for Docker container user management.
In simple words functionality that I'm looking for is:
Add any user and password in dockerfile.
While accessing external IP, same user and password must be provided to view running monitoring service.
From the little information about your setup, I would say the authentication should be handled by your monitoring service itself. If it is some kind of web app, you could use simple basic auth as a first step.
Docker doesn't know what's running in your container, and its networking setup is limited to a simple pass-through between containers or from host ports to containers. It's typical to run programs with many different network protocols in Docker (Web servers, MySQL, PostgreSQL, and Redis all have different wire protocols) and there's no way Docker on its own could inject an authorization step.
Many non-trivial Docker setups include some sort of HTTP reverse proxy container (frequently Nginx, occasionally Apache) that can both serve a Javascript UI's built static files and also route HTTP requests to a backend service. You can add an authentication control there. The mechanics of doing this would be specific to your choice of proxy server.
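As a hedged sketch of that idea, you could generate an htpasswd file and put an Nginx container with basic auth in front of the monitoring service (the backend name, port and network are assumptions about your setup):
htpasswd -c ./htpasswd admin          # from apache2-utils; prompts for a password
cat > ./default.conf <<'EOF'
server {
  listen 80;
  location / {
    auth_basic           "Monitoring";
    auth_basic_user_file /etc/nginx/htpasswd;
    proxy_pass           http://monitoring:9090;   # hypothetical backend container/port
  }
}
EOF
docker run -d -p 80:80 --network mynet \
  -v "$PWD/default.conf:/etc/nginx/conf.d/default.conf:ro" \
  -v "$PWD/htpasswd:/etc/nginx/htpasswd:ro" nginx
External clients then hit port 80 and must provide the username and password before the request is proxied through to the monitoring service.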
Consider that any information you include in the Dockerfile or in docker build --build-arg options can be easily retrieved by anyone who has your image (by looking at its docker history, or by running a debug shell on it with docker run). You may need to inject the credentials at run time using a bind mount if you consider them sensitive and you're not storing your image somewhere with strong protections (e.g. if anyone can docker pull it from Docker Hub).
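To see why build-time secrets leak, and what the run-time alternative looks like (the image and file names are hypothetical):
# anyone with the image can recover Dockerfile steps and build arguments
docker history --no-trunc my-monitoring-image
# safer: keep credentials out of the image and mount them at run time
docker run -d -v "$PWD/users.htpasswd:/etc/myapp/users.htpasswd:ro" my-monitoring-image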

Protecting Docker daemon

Am I understanding correctly that the docs discuss how to protect the Docker daemon when commands are issued (docker run,...) with a remote machine as the target? When controlling docker locally this does not concern me.
Running Docker swarm does not require this step either, as the security between the nodes is handled by Docker automatically. For example, using Portainer in a swarm with multiple agents does not require extra security steps, due to the overlay network in a swarm being encrypted by default.
Basically, when my target machine will always be localhost there are no extra security steps to be taken, correct?
Remember that anyone who can run any Docker command can almost trivially get unrestricted root-level access on the host:
docker run --rm -it -v /:/host busybox sh
# now inside the container, with the host's filesystem mounted at /host:
vi /host/etc/passwd
So yes, if you're using a remote Docker daemon, you must run through every step in that document, correctly, or your system will get rooted.
If you're using a local Docker daemon and you haven't enabled the extremely dangerous -H option, then security is entirely controlled by Unix permissions on the /var/run/docker.sock special file. It's common for that socket to be owned by a docker group, and to add local users to that group; again, anyone who can run docker ps can also trivially edit the host's /etc/sudoers file and grant themselves whatever permissions they want.
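Concretely, the default setup typically looks like this (the user name is a placeholder):
# the API socket is root:docker, mode 660, on most installs
ls -l /var/run/docker.sock
# srw-rw---- 1 root docker 0 ... /var/run/docker.sock
# adding a user to the docker group therefore grants root-equivalent access
sudo usermod -aG docker alice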
So: accessing docker.sock implies trust with unrestricted root on the host. If you're passing the socket into a Docker container that you're trusting to launch other containers, you're implicitly also trusting it to not mount system directories off the host when it does. If you're trying to launch containers in response to network requests, you need to be insanely careful about argument handling lest a shell-injection attack compromise your system; you are almost always better off finding some other way to run your workload.
In short, just running Docker isn't a free pass on security concerns. A lot of common practices, while convenient, are actually quite insecure. A quick web search for "Docker cryptojacking" will very quickly show you the consequences.

use docker's remote API in a secure manner

I am trying to find an effective way to use the docker remote API in a secure way.
I have a docker daemon running in a remote host, and a docker client on a different machine. I need my solution to not be client/server OS dependent, so that it would be relevant to any machine with a docker client/daemon etc.
So far, the only way I found to do such a thing is to create certs on a Linux machine with openssl and copy the certs to the client/server manually, as in this example:
https://docs.docker.com/engine/security/https/
and then configure docker on both sides to use the certificates for encryption and authentication.
This method is rather clunky in my opinion, because sometimes it's a problem to copy files and put them on each machine I want to use the remote API from.
I am looking for something more elegant.
Another solution I've found is using a proxy for basic HTTP authentication, but in this method the traffic is not encrypted and it is not really secure that way.
Does anyone have a suggestion for a different solution or for a way to improve one of the above?
Your favorite system automation tool (Chef, SaltStack, Ansible) can probably directly manage the running Docker containers on a remote host, without opening another root-equivalent network path. There are Docker-oriented clustering tools (Docker Swarm, Nomad, Kubernetes, AWS ECS) that can run a container locally or remotely, but you have less control over where exactly (you frequently don't actually care) and they tend to take over the machines they're running on.
If I really had to manage systems this way I'd probably use some sort of centralized storage to keep the TLS client keys, most likely Vault, which has the property of storing the keys encrypted, requiring some level of authentication to retrieve them, and being able to access-control them. You could write a shell function like this (untested):
dockerHost() {
  # fetch the TLS client materials for host "$1" from Vault (KV v2 layout assumed)
  mkdir -p "$HOME/.docker/$1"
  JSON=$(vault kv get -format=json "secret/docker/$1")
  for f in ca.pem cert.pem key.pem; do
    echo "$JSON" | jq -r ".data.data[\"$f\"]" > "$HOME/.docker/$1/$f"
  done
  # point the docker CLI at the remote daemon and enable TLS verification
  export DOCKER_HOST="tcp://$1:2376"
  export DOCKER_TLS_VERIFY=1
  export DOCKER_CERT_PATH="$HOME/.docker/$1"
}
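Usage would then look something like this, assuming the keys were stored under secret/docker/<host> in Vault (the host name is hypothetical):
dockerHost build-01.example.com
docker ps    # now talks to the remote daemon over TLS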
While your question makes clear you understand this, it bears repeating: do not enable unauthenticated remote access to the Docker daemon, since it is trivial to take over a host with unrestricted root access if you can access the socket at all.
Based on your comments, I would suggest you go with Ansible if you don't need the swarm functionality and require only single host support. Ansible only requires SSH access which you probably already have available.
It's very easy to use an existing service that's defined in Docker Compose or you can just invoke your shell scripts in Ansible. No need to expose the Docker daemon to the external world.
A very simple example file (playbook.yml)
- hosts: all
  tasks:
    - name: setup container
      docker_container:
        name: helloworld
        image: hello-world
Running the playbook
ansible-playbook -i username@mysshhost.com, playbook.yml
Ansible provides pretty much all of the functionality you need to interact with Docker via its module system:
docker_service
Use your existing Docker compose files to orchestrate containers on a single Docker daemon or on Swarm. Supports compose versions 1 and 2.
docker_container
Manages the container lifecycle by providing the ability to create, update, stop, start and destroy a container.
docker_image
Provides full control over images, including: build, pull, push, tag and remove.
docker_image_facts
Inspects one or more images in the Docker host's image cache, providing the information as facts for making decisions or assertions in a playbook.
docker_login
Authenticates with Docker Hub or any Docker registry and updates the Docker Engine config file, which in turn provides password-free pushing and pulling of images to and from the registry.
docker (dynamic inventory)
Dynamically builds an inventory of all the available containers from a set of one or more Docker hosts.
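The same modules can also be driven ad hoc from the shell, without a playbook; for instance, a minimal sketch that pulls an image on the remote host via docker_image (reusing the inline inventory from above):
ansible all -i username@mysshhost.com, -m docker_image -a "name=hello-world"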

Remote Docker daemon with public access from any client?

Docker has a relatively coherent guide regarding running a docker daemon with remote access over HTTPS: https://docs.docker.com/articles/https/
What it does not address is how the daemon can be exposed so that clients on varying hosts may access it, rather than just one static, known-beforehand host.
The first use case is that I want to use/try/test stuff against the daemon from my local machine. I expect that sooner or later different production hosts will also use the daemon, for example if our organization decides to use (or trial-run) different build systems that have to interact with the daemon (which is what we are going to do with CircleCI as it builds our projects).
Thanks.
It is not limited to one host.
Follow the instructions, then copy the client certificates to each client that will connect.
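On each client, the setup then reduces to a few environment variables (the host name and certificate directory are placeholders):
# copy ca.pem, cert.pem and key.pem from the guide to the client, then:
export DOCKER_HOST=tcp://your-docker-host:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.docker    # the directory holding ca.pem, cert.pem, key.pem
docker info                          # should now talk to the remote daemon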
