How can you write a docker daemon that can kick off other docker containers as needed?

I've seen several options for how a Docker container can communicate directly with its host system, but they all seem kind of sneaky. For instance, it appears one can start a container and bind-mount (using -v) the host's docker executable into the container. One can send messages to the host over a networking protocol. It also appears that the --privileged flag might help as well.
Each of these methods appears to have drawbacks and security concerns. My bigger question is if this architecture is even the best approach.
Our goal is to have a daemon process running in a Docker container, polling a database that is being used as a queue. (I know this is frowned upon in some ways, but our traffic is very low and internal; performance for this sort of queue is not an issue.) When the daemon detects that there is work to be done, it kicks off another Docker container to handle that work. That container dies when it is finished. Each container belongs to a "system" and runs load on that system. Each system can only have one container running load on it.
Is this a paradigm that makes sense?
Would the daemon be better off as just a host-level process? A Python script, for instance, instead of a docker container?
Is Docker meant to be used this way? Am I just missing where, in the Docker documentation, it tells me how to do this?
Are my "sneaky" ideas above not so sneaky, after all?
I understand there is an opportunity for opinion here. I am looking for some concise best practices.
Thanks in advance!

The preferred solution that I've seen the most is to install the docker binaries in the container, and then mount /var/run/docker.sock into the container. The Dockerfile I have for something similar looks like:
FROM upstream:latest
ARG DOCKER_GID=999
USER root
# install docker
RUN curl -sSL https://get.docker.com/ | sh
# app setup goes here
# configure user with access to docker
RUN groupmod -g ${DOCKER_GID} docker && \
    usermod -aG docker appuser
USER appuser
And then it's run with:
docker run -d --name myapp -v /var/run/docker.sock:/var/run/docker.sock myapp
This would be the most efficient solution, since you remove the network overhead. It also removes any network vulnerabilities, either from an open port or from including the TLS cert inside your container, which could accidentally leak with something like a lost backup.
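If the container holding the daemon loop is built like the image above, the polling and dispatch logic from the original question can talk to the mounted socket through the Python docker SDK. The following is only a rough sketch under assumptions of my own: the worker image name, the system label, and fetch_pending_work() are hypothetical placeholders, not anything defined in the question or the Dockerfile above.

import time

import docker

# Rough sketch only: poll a work queue and start one worker container per
# "system" through the socket bind-mounted from the host. The image name,
# the label scheme and fetch_pending_work() are hypothetical placeholders.
client = docker.from_env()  # uses the mounted /var/run/docker.sock


def fetch_pending_work():
    """Stand-in for the database poll described in the question."""
    return []  # e.g. [{"system": "system-a", "job_id": 42}]


while True:
    for job in fetch_pending_work():
        # Skip systems that already have a running worker.
        label = "system={}".format(job["system"])
        if client.containers.list(filters={"label": label, "status": "running"}):
            continue
        client.containers.run(
            "worker-image:latest",               # hypothetical worker image
            command=["run-load", str(job["job_id"])],
            labels={"system": job["system"]},
            detach=True,
            auto_remove=True,                    # the worker dies when finished
        )
    time.sleep(30)

Filtering running containers by a system label is one simple way to enforce the one-worker-per-system rule without keeping extra state in the daemon process itself.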


Access files on host server from the Meteor App deployed with Meteor Up

I have a Meteor App deployed with Meteor UP to Ubuntu.
From this App I need to read a file which is located outside the App container, on the host server.
How can I do that?
I've tried to set up volumes in mup.js, but no luck. It seems that I'm missing how to correctly provide /host/path and /container/path.
volumes: {
  // passed as '-v /host/path:/container/path' to the docker run command
  '/host/path': '/container/path',
  '/second/host/path': '/second/container/path'
},
I've read the docs for Docker volume mounting, but obviously I can't understand it.
Let's say file is in /home/dirname/filename.csv.
How to correctly mount it into App to be able to access it from the Application?
Or maybe there are other possibilities to access it?
Welcome to Stack Overflow. Let me suggest another way of thinking about this...
In a scalable cluster, docker instances can be spun up and down as the load on the app changes. These may or may not be on the same host computer, so building a dependency on the file system of the host isn't a great idea.
You might be better off using a file storage mechanism such as S3, which will scale on its own and where disk storage limits won't apply.
Another option is to determine if the files could be stored in the database.
I hope that helps
Let's try to narrow the problem down.
Meteor UP is passing the configuration parameter volumes directly on to docker, as they also mention in the comment you included. It therefore might be easier to test it against docker directly - narrowing the components involved down as much as possible:
sudo docker run \
  -it \
  --rm \
  -v "/host/path:/container/path" \
  -v "/second/host/path:/second/container/path" \
  busybox \
  /bin/sh
Let me explain this:
sudo because Meteor UP uses sudo to start the container. See: https://github.com/zodern/meteor-up/blob/3c7120a75c12ea12fdd5688e33574c12e158fd07/src/plugins/meteor/assets/templates/start.sh#L63
docker run we want to start a container.
-it to access the container (think of it like SSH'ing into the container).
--rm to automatically clean up - remove the container - after we're done.
-v - here we pass the volumes as you defined them (I took the two example directories you provided).
busybox - an image with some useful tools.
/bin/sh - the application to start the container with.
I'd expect that you cannot access the files here either. In that case, dig deeper into why you can't make a folder accessible in Docker.
If you can (which would seem odd to me), start your container and try to get into it by running the following command:
docker exec -it my-mup-container /bin/sh
You can think of this command like SSH'ing into a running container. Now you can look around to check whether the file really isn't there, whether the credentials inside the container are correct, etc.
Lastly, I have to agree with @mikkel that mounting a local directory isn't a good option, but you can now start looking into how to use a docker volume to mount remote storage. He mentioned S3 on AWS; I've worked with Azure Files on Azure; there are plenty of possibilities.

Detach container from host console

I am creating a docker container from the ubuntu:16.04 image using the Python docker package. I am passing tty as True and detach as True to the client.containers.run() function. The container starts with the /sbin/init process and is created successfully. But the problem is that the login prompt on my host machine's console is replaced with the container's login prompt. As a result, I am not able to log in on the machine's console. SSH connections to the machine work fine.
This happens even when I run my Python script after connecting to the machine over SSH. I tried different options like setting tty to False, setting stdout to False, and setting the environment variable TERM to xterm in the container, but nothing helps.
It would be really great if someone can suggest a solution for this problem.
My script is very simple:
import docker

client = docker.from_env()
container = client.containers.run('ubuntu:16.04', '/sbin/init', privileged=True,
                                  detach=True, tty=True, stdin_open=True,
                                  stdout=False, stderr=False,
                                  environment=['TERM=xterm'])
I am not using any dockerfile.
I have been able to figure out that this problem happens when I start the container in privileged mode. If I do this, the /sbin/init process launches /sbin/agetty processes, which causes /dev/tty to be attached to the container. I need to figure out a way to start /sbin/init in such a way that it does not create /sbin/agetty processes.
/sbin/init in Ubuntu is systemd. If you look at its documentation, it does a ton of things: it configures various kernel parameters, mounts filesystems, configures the network, launches getty processes, and so on. Many of these things require changing host-global settings, and if you launch a container with --privileged you're allowing systemd to do that.
I'd give two key recommendations on this command:
Don't run systemd in Docker. If you really need a multi-process init system, supervisord is popular, but prefer single-process containers. If you know you need some init(8) (process ID 1 has some responsibilities) then tini is another popular option.
Don't directly run bare Linux distribution images. Whatever software you're trying to run, it's almost assuredly not in an alpine or ubuntu image. Build a custom image that has the software you need and run that; you should set up its CMD correctly so that you can docker run the image without any manual setup.
Also remember that the ability to run any Docker command at all implies unrestricted root-level access over the host. You're seeing some of that here where a --privileged container is taking over the host's console; it's also very very easy to read and edit files like the host's /etc/shadow and /etc/sudoers. There's nothing technically wrong with the kind of script you're showing, but you need to be extremely careful with standard security concerns.
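As a concrete illustration of those recommendations, here is a minimal sketch using the same Python SDK as the question, assuming a hypothetical purpose-built image called my-app:latest whose default CMD is the actual application process; no privileged mode, TTY, or /sbin/init is involved, so the host console is left alone.

import docker

# Minimal sketch of the recommended pattern: run a purpose-built image whose
# default CMD is the application itself ("my-app:latest" is a placeholder),
# with no privileged mode, no TTY and no /sbin/init.
client = docker.from_env()
container = client.containers.run("my-app:latest", detach=True)
print(container.status)   # inspect output with container.logs() if needed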

Start service using systemctl inside docker container

In my Dockerfile I am trying to install multiple services and want to have them all start up automatically when I launch the container.
One of the services is mysql, and when I launch the container I don't see the mysql service starting up. When I try to start it manually, I get the error:
Failed to get D-Bus connection: Operation not permitted
Dockerfile:
FROM centos:7
RUN yum -y install mariadb mariadb-server
COPY start.sh start.sh
CMD ["/bin/bash", "start.sh"]
My start.sh file:
service mariadb start
Docker build:
docker build --tag="pbellamk/mariadb" .
Docker run:
docker run -it -d --privileged=true pbellamk/mariadb bash
I have checked the centos:systemd image and that doesn't help either. How do I launch the container with the services started using systemctl/service commands?
When you do docker run with bash as the command, the init system (e.g. SystemD) doesn’t get started (nor does your start script, since the command you pass overrides the CMD in the Dockerfile). Try to change the command you use to /sbin/init, start the container in daemon mode with -d, and then look around in a shell using docker exec -it <container id> sh.
Docker is designed around the idea of a single service/process per container. Although it definitely supports running multiple processes in a container and in no way stops you from doing that, you will eventually run into areas where multiple services in one container don't quite map to what Docker or external tools expect. Things like scaling services, or using Docker Swarm across hosts, only support the concept of one service per container.
Docker Compose allows you to compose multiple containers into a single definition, which means you can use more of the standard, prebuilt containers (httpd, mariadb) rather than building your own. Compose definitions map to Docker Swarm services fairly easily. Also look at Kubernetes and Marathon/Mesos for managing groups of containers as a service.
Process management in Docker
It's possible to run systemd in a container, but it requires --privileged access to the host and the /sys/fs/cgroup volume mounted, so it may not be the best fit for most use cases.
The s6-overlay project provides a more docker friendly process management system using s6.
It's fairly rare you actually need ssh access into a container, but if that's a hard requirement then you are going to be stuck building your own containers and using a process manager.
You can avoid running a systemd daemon inside a docker container altogether. You can even avoid writing a special start.sh script; that is another benefit of using the docker-systemctl-replacement script.
The systemctl.py script can parse the normal *.service files to know how to start and stop services. You can register it as the CMD of an image, in which case it will look for all the systemctl-enabled services; those will be started and stopped in the correct order.
The current testsuite includes testcases for the LAMP stack including centos, so it should run fine specifically in your setup.
I found this project:
https://github.com/defn/docker-systemd
which can be used to create an image based on the stock ubuntu image but with systemd and multiuser mode.
My use case is the first one mentioned in its Readme. I use it to test the installer script of my application, which is installed as a systemd service. The installer creates a systemd service, then enables and starts it. I need CI tests for the installer. The test should create the installer, install the application on an Ubuntu system, and connect to the service from outside.
Without systemd the installer would fail, and it would be much more difficult to write the test with Vagrant. So there are valid use cases for systemd in Docker.

Health Check command for docker(1.12) container (Not in Dockerfile!)

Docker version 1.12.
I got a Dockerfile from here:
FROM nginx:latest
RUN touch /marker
ADD ./check_running.sh /check_running.sh
RUN chmod +x /check_running.sh
HEALTHCHECK --interval=5s --timeout=3s CMD ./check_running.sh
I'm able to roll out updates and run health checks with the check_running.sh shell script. Here, the check_running.sh script is copied into the image, so the launched container has it.
Now, my question: is there any way to health check from outside of the container, with the script also located outside?
I'm expecting a health check command to get the container's performance (depending on what we write in the script). If the container is not performing well, it should roll back to the previous version (kind of a process that monitors the containers; if one is not healthy, it should roll back to the previous version).
Thanks
is there any way to health check from outside of the container, with the script also located outside?
Kind of a process that monitors the containers; if one is not healthy, it should roll back to the previous version
You have several options:
From outside, you run a process inside the container to check its health with docker exec. This could be any sequence of shell commands. If you want to keep your scripts outside of the container, you might use something like cat script.sh | docker exec -i container sh -s (note -i without -t, since the input is piped).
You check the container's health from outside the container, e.g. by looking for a process that should be running inside the container (try setting a security profile and using ps -Zax, or try looking for children of the daemon), or you can give each container a specific user ID with --user 12345 and then look for that, or e.g. connect to its services. You'd have to make sure it's running inside the right container. You can access the containers' filesystems below /var/lib/docker/devicemapper/mnt/<hash>/rootfs.
You run a HEALTHCHECK inside the container and check its health with docker inspect --format='{{json .State.Health.Status}}' <containername> combined with e.g. a line in the Dockerfile:
HEALTHCHECK CMD wget -q -s http://some.host to check the container has internet access.
I'd recommend option 3, because it's likely to be more compatible with other tools in the future.
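As a rough sketch of how option 3 can be driven from outside the container with the Python docker SDK (reading the same data docker inspect shows); the container name myapp, the image tag, and the roll-back action are placeholders for whatever your deployment process actually does:

import time

import docker

# Sketch of an external monitor built on option 3: read the HEALTHCHECK
# result through the API (the same data `docker inspect` shows) and react.
# The container name, image tag and roll-back step are placeholders.
client = docker.from_env()


def health_status(name):
    container = client.containers.get(name)
    container.reload()  # refresh the cached inspect data
    return container.attrs["State"]["Health"]["Status"]  # starting/healthy/unhealthy


while True:
    if health_status("myapp") == "unhealthy":
        # Placeholder "roll-back": replace the bad container with a
        # previously known-good image tag.
        client.containers.get("myapp").remove(force=True)
        client.containers.run("myapp-image:previous", name="myapp", detach=True)
    time.sleep(5)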
Just got a comment from a blog! It referred to the HealthCheck section of the Docker documentation. There is a health check "option" for the docker command to "override" the Dockerfile defaults. I have not checked it yet, but it seems like a good way to get what I want. I will check and update the answer!
The docker inspect command lets you view the output of health check commands, whether they succeed or fail:
docker inspect --format='{{json .State.Health}}' your-container-name
That's not available with the Dockerfile HEALTHCHECK option; all checks run inside the container. To me, this is a good thing, since it avoids potentially untrusted code running directly on the host, and it allows you to include the dependencies for the health check inside your container.
If you need to monitor your container from outside, you'll need to use another tool or monitoring application; there are quite a few of them out there.
You can view the results of the health check by running docker inspect on a container.
Another approach, depending on your application, would be to expose a /healthz endpoint that the healthcheck also probes; this way it can be queried externally or internally as needed.
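For example, a tiny external probe of such an endpoint could look like the following sketch, assuming the container publishes the application's port to the host (the URL is a placeholder):

import urllib.request

# Tiny external probe of a hypothetical /healthz endpoint; assumes the
# container publishes the application's port to the host (e.g. -p 8080:8080).
def is_healthy(url="http://localhost:8080/healthz", timeout=3):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False


print(is_healthy())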

Is there a "multi-user" Docker mode, e.g. for scientific clusters?

I want to use Docker for isolating scientific applications for use on an HPC Unix cluster. Scientific software often has exotic dependencies, so isolating them with Docker appears to be a good idea. The programs are to be run as jobs and not as services.
I want to have multiple users use Docker and the users should be isolated from each other. Is this possible?
I performed a local Docker installation and had two users in the docker group. The call to docker images showed the same results for both users.
Further, the jobs should be run under the calling user's UID and not as root.
Is such a setup feasible? Has it been done before? Is this documented anywhere?
Yes, there is! It's called Singularity, and it was designed with scientific applications and multi-user HPC clusters in mind. More at http://singularity.lbl.gov/
OK, I think more and more solutions will pop up for this. I'll try to update the following list in the future:
udocker for executing Docker containers as users
Singularity (kudos to Filo) is another Linux-container-based solution
Don't forget about DinD (Docker in Docker): jpetazzo/dind
You could dedicate one Docker-in-Docker instance per user, and within one of those docker containers the user could launch jobs in their own docker containers.
I'm also interested in this possibility with Docker, for similar reasons.
There are a few problems I can think of:
The Docker daemon runs as root, giving anyone in the docker group effective host root permissions (e.g. permissions can be leaked by mounting the host's / directory as root).
Multi-user isolation, as mentioned.
I'm not sure how well this will play with any existing load balancers.
I came across Shifter, which may be worth a look and partly solves #1:
http://www.nersc.gov/research-and-development/user-defined-images/
Also, I know there is discussion about using kernel user namespaces to provide a container:root --> host:non-privileged-user mapping, but I'm not sure whether this is happening or not.
There is an officially supported Docker image that allows one to run Docker in Docker (dind), available here: https://hub.docker.com/_/docker/. This way, each user can have their own Docker daemon. First, start the daemon instance:
docker run --privileged --name some-docker -d docker:stable-dind
Note that the --privileged flag is required. Next, connect to that instance from a second container:
docker run --rm --link some-docker:docker docker:edge version
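If you prefer driving that inner daemon from code rather than the CLI, a hedged sketch with the Python SDK could look like this; it assumes the script runs in a container started with --link some-docker:docker (so the hostname docker resolves) and that the dind daemon is listening on the unencrypted port 2375, which was the default for the docker:stable-dind images of that era:

import docker

# Hedged sketch: talk to the dind daemon with the Python SDK instead of the
# docker CLI. Assumes this runs in a container linked as
# `--link some-docker:docker` and that the dind daemon listens on the
# unencrypted port 2375 (the default for docker:stable-dind at the time).
client = docker.DockerClient(base_url="tcp://docker:2375")
print(client.version()["Version"])  # version reported by the inner daemon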
