RUN in Dockerfile with systemd as pid 1

Is it possible to use RUN in a dockerfile while having systemd as pid 1?
I am trying to execute an install script that requires systemd to be present and running on the system, inside a Dockerfile, e.g.:
FROM debian:stable
RUN apt install -y systemd
RUN someInstallScriptThatRequiresSystemd.sh

Is it possible to use RUN in a dockerfile while having systemd as pid 1?
No. Each RUN command generally runs in its own container, and the command itself (or the sh -c wrapper Docker provides) will be pid 1.
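You can see this for yourself with a throwaway build step (a sketch; debian:stable does not ship ps, so procps is installed first):
FROM debian:stable
RUN apt-get update && apt-get install -y procps
# shows that pid 1 is the RUN command itself (or its sh -c wrapper), not an init system
RUN ps -eo pid,comm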
Also remember that a Docker image doesn't contain running processes, only a filesystem image and metadata that says how to run a container. You can't persist an image with systemd or other services running, and since the image doesn't contain running services, it doesn't make sense to restart a service in a Dockerfile.
Systemd wants to control a lot of things, most of which are host-level things that a container shouldn't be thinking about. You usually shouldn't run it in a container at all; if you must, the better setups remove most of the built-in system-setup tasks. Better is to use a lighter-weight supervisor like supervisord; better still is to run one process per container, and a minimal init like tini if you need it.
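For illustration, a minimal-init sketch using tini (my-service is a hypothetical binary that can run in the foreground):
FROM debian:stable
RUN apt-get update && apt-get install -y tini
# tini becomes pid 1, reaps zombies, and forwards signals to its child
ENTRYPOINT ["/usr/bin/tini", "--"]
CMD ["my-service", "--foreground"]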
If you just need to let this installer run systemctl, you can make that "command" exist as a no-op:
RUN ln -s /bin/true /sbin/systemctl
RUN systemctl restart my-service # does nothing, successfully


Docker command difference

I am new to Docker containers. Can someone please tell me the difference between these two commands? To my knowledge they have the same output, so why do we use the bash command?
docker run -it ubuntu
docker run -it ubuntu bash
In Docker, we run a Linux container. A Linux system is alive as long as its init process (PID 1) is alive; the init process is the heart of the system, and when it is killed, the system dies with it.
In a containerized architecture, you run a container for one purpose: to run a single service. We want the container to die if the service fails, so we make that service the container's PID 1 job.
When you run docker run -it ubuntu bash, bash is the PID 1 process for the container. As soon as you exit from bash, the container stops.
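You can watch this happen (a sketch; the container ID is illustrative):
$ docker run -it ubuntu bash
root@1a2b3c4d5e6f:/# exit
exit
$ docker ps -a --format '{{.Image}}: {{.Status}}'
ubuntu: Exited (0) 3 seconds ago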
Instead of bash you can also try other commands, like @Shmuel suggested.
Well, when we create custom images, we often want to predefine a default PID 1 job for our custom image. If it is predefined, you don't need to mention it in the docker run command.
In the ubuntu image, the predefined PID 1 job is bash. So, if you don't mention bash in the run command, it works the same.
docker run -it ubuntu lets you run a command inside the container.
The bash is the command to run.
For example instead you can run
docker run -it ubuntu ls /home
This will list the /home dir inside the container.
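If you want to confirm an image's predefined default command, docker inspect can show it (a sketch):
$ docker inspect --format '{{.Config.Cmd}}' ubuntu
[/bin/bash]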

Docker exits immediately after launching apache and neo4j

I have a script /init that launches apache and neo4j. This script is already in the image ubuntu:14. The following is the content of /init:
service apache2 start
service neo4j start
From this image, I am creating another image with the following dockerfile
FROM ubuntu:v14
EXPOSE 80 80
ENTRYPOINT ["/init"]
When I run the command docker run -d ubuntu:v15, the container starts and then exits. As far as I understood, the -d option runs the container in the background. Also, the script /init launches two daemons. Why does the container exit?
In fact, I think your first problem is the missing #! in the init file; if you did not add something like #!/bin/bash at the start, the container will complain like this:
shubuntu1@shubuntu1:~$ docker logs priceless_tu
standard_init_linux.go:207: exec user process caused "exec format error"
But even after you fix the above problem, you still can't start your container, for the reason others have given: PID 1 must always be there, and in your case, once the service xxx start commands finish, PID 1 exits, which makes the container exit as well.
So, to get around this, the last command must never exit. A minimal workable example for your reference:
Dockerfile:
FROM ubuntu:14.04
RUN apt-get update && \
apt-get install -y apache2
COPY init /
RUN chmod +x /init
EXPOSE 80
ENTRYPOINT ["/init"]
init:
#!/bin/bash
# you can add other service start here
# e.g. service neo4j start as you like if you have installed it already
# next will make apache run in foreground, so PID1 not exit.
/usr/sbin/apache2ctl -DFOREGROUND
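To try that example (a sketch; the image tag is arbitrary):
docker build -t apache-fg .
docker run -d -p 8080:80 apache-fg
curl http://localhost:8080   # the Apache default page should come back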
When your Dockerfile specifies an ENTRYPOINT, the lifetime of the container is exactly the length of whatever its process is. Generally the behavior of service ... start is to start the service as a background process and then return immediately; so your /init script runs the two service commands and completes, and now that the entrypoint process is completed, the container exits.
Generally accepted best practice is to run only one process in a container. That's especially true when one of the processes is a database. In your case there are standard Docker Hub Apache httpd and neo4j images, so I'd start by using an orchestration tool like Docker Compose to run those two containers side-by-side.
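A minimal docker-compose.yml sketch along those lines (image tags and published ports are illustrative, not taken from the question):
version: "3"
services:
  web:
    image: httpd:2.4
    ports:
      - "80:80"
  neo4j:
    image: neo4j:3.5
    ports:
      - "7474:7474"   # browser UI
      - "7687:7687"   # Bolt protocol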

Docker exits after starting a command that goes into the background. How can we then make use of that service?

I am starting a command which goes into the background on its own. On the terminal it appears that the command shows some output on the screen and exits.
On my host I can check that the command is still running by finding it in
$ ps -aux
So Docker thinks the command is done and it exits.
Based on that background-running command, I want to run another command using --exec.
So how can I achieve this?
A docker container lives as long as the process that you have specified it to run, has not exited.
Docker containers do not make use of daemons and services - you are supposed to run your process in the foreground of the container. This is the recommended usage of containers - although you can force it to do otherwise if you want to.
Something that has helped me a lot conceptually, is to think more of a docker container as a "process isolation" mechanism, and less of it as a box of software that you can start and stop.
You may find this guide useful if you want to start multiple processes in the container: https://docs.docker.com/config/containers/multi-service_container/
A little trick is to add an indefinitely running command to the end of your Docker ENTRYPOINT or CMD. A commonly used one is tail -f /dev/null, like this:
systemctl start myservice && tail -f /dev/null
I cannot say I can recommend this, but it will quite likely do what you want it to.
I will include a minimal example here, of how this can be used. Here's a Dockerfile where the ENTRYPOINT is specified to start a service (running in the background), and then tailing the null device, /dev/null:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y apache2
ENTRYPOINT service apache2 start && tail -f /dev/null
Build it with:
docker build -t servicetest:01 .
Start it with:
docker run -p 8080:80 servicetest:01
And visit http://localhost:8080 to see it working

Installing systemd inside an Ubuntu 14.04 Docker container - is it possible?

I am trying to install and configure OpenStack (DevStack) inside a Docker container. While installing, I am getting the following error:
"Failed to get D-Bus connection: No connection to service manager."
Later, I checked and found that it's because of a systemd problem. When I tried executing the command systemd
$ systemd
I am getting the following output:
Trying to run as user instance, but the system has not been booted with systemd.
Following is the setup I am using:
Host machine OS : Ubuntu 14.04,
Docker Version : Docker version 1.12.4, build 1564f02,
Docker Container OS : Ubuntu 14.04
Can anyone help with this? Thanks in advance.
First of all, systemd expects /sys/fs/cgroup to be mounted. Additionally, you must make the container privileged, or systemd will fail to start. So run the container like this:
docker run -v /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged -it --rm ubuntu
Then you can go ahead and run /bin/systemd --system --unit=basic.target from bash, and it should run normally (with some errors of course, because Docker does not virtualize an entire system, nor is the library:ubuntu image more than the minimum size required to run properly).
After you have systemd running (semi-)properly, you can simply use docker stop to stop the container.
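Putting those pieces together, a hedged one-liner that boots systemd as the container's main process might look like this (it assumes systemd is actually installed in the image, which the stock Ubuntu 14.04 image does not guarantee):
docker run -d --privileged \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  --name systemd-test \
  ubuntu \
  /bin/systemd --system --unit=basic.target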
This post is based on my own research, a few weeks of it too, for a project I like to call initbuntu (originally I tried to get init running, but running systemd directly was my only solution after all my failed tries). The container will be available on Docker Hub as logandark/initbuntu, Soon™. For now, a broken copy (or not broken, I dunno) is available there at the time of posting.
Sources (kinda):
/sys/fs/cgroup: Here
systemd --system: A StackOverflow post I lost the link to.
Existing DevStack on Docker Project
First of all, you can get a preconfigured Dockerfile with DevStack Ocata/Pike on Docker here. The repository also contains further information on DevStack and containers.
Build Your Own Image
Running systemd in Docker is certainly possible and has been done before. I found that Ubuntu 16.04 LTS is a good foundation, both for the Docker host and for the base image.
Your systemd/DevStack Dockerfile needs this configuration, which also cleans up services you probably don't want inside a Docker container:
FROM ubuntu:16.04
#####################################################################
# Systemd workaround from solita/ubuntu-systemd and moby/moby#28614 #
#####################################################################
ENV container docker
# No need for graphical.target
RUN systemctl set-default multi-user.target
# Gracefully stop systemd
STOPSIGNAL SIGRTMIN+3
# Cleanup unneeded services
RUN find /etc/systemd/system \
/lib/systemd/system \
-path '*.wants/*' \
-not -name '*journald*' \
-not -name '*systemd-tmpfiles*' \
-not -name '*systemd-user-sessions*' \
-exec rm \{} \;
# Workaround for console output error moby/moby#27202, based on moby/moby#9212
CMD ["/bin/bash", "-c", "exec /sbin/init --log-target=journal 3>&1"]
If you intend to run OpenStack/DevStack inside said container, it might save you lots of trouble to start it privileged instead of defining separate security capabilities and volumes:
docker run \
--name devstack \
--privileged \
--detach \
image
To get a bash shell inside your new systemd container, try this:
docker exec \
--tty \
--interactive \
devstack \
bash
Systemd should work inside a properly configured container. You can run the container in privileged mode to run systemd.
"Systemd cannot run without SYS_ADMIN, less privileges than that won't work (see #2296 (comment)). Yes it's possible to make it "easier" (a tool that automatically sets these), but it'll still need certain privileges"
See this GitHub issue.
After all, Docker is an application container: it runs the process you specify at run time, and after that process completes, it exits. Maybe you need an OS container or a virtual machine for your use case. See OS containers vs. application containers here.
In most cases the error message comes up because an installer program has tried to run "systemctl start ". Unlike initscripts, the systemctl command will not try to execute the start script directly; instead it tries to contact the systemd daemon to execute the start sequence of the service. So all services have a common parent in the systemd daemon.
It can be overkill to run a systemd daemon inside a Docker container just to start a service. You could use the docker-systemctl-replacement script to overwrite /usr/bin/systemctl; in that case the target service is started without the help of a systemd daemon. It runs the ExecStart from the *.service file directly.
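A hedged sketch of how that replacement script can be baked into an image (the filename systemctl.py and the apache2 service are illustrative; check the project's README for the exact file to copy):
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y apache2
# overwrite systemctl so "systemctl start ..." works without a systemd daemon
COPY systemctl.py /usr/bin/systemctl
RUN chmod +x /usr/bin/systemctl
EXPOSE 80
CMD ["/usr/bin/systemctl"]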

Puppet container won't start automatically

So I have created a puppet container for a certificate authority. It works, but does not start correctly. Here is my Dockerfile:
FROM centos:6
RUN yum update -y
RUN rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
RUN yum install -y ruby puppet-server
ADD puppet.conf /etc/puppet/puppet.conf
VOLUME ["/var/lib/puppet/ssl/"]
EXPOSE 9140
#NOTHING BELOW THIS COMMENT RUNS
RUN puppet master --verbose
CMD service puppetmaster start
CMD chkconfig puppetmaster on
CMD []
I can then start the container with the following run command (note that I named the image ca-puppet):
docker run -d -p 9140:9140 -it --name ca-puppet \
-v /puppet-host/ssl:/var/lib/puppet/ssl \
ca-puppet bash
The issue is that I need to docker exec into the container and run the following commands to get it started and create the CA certificates in its ssl directory:
puppet master --verbose
service puppetmaster start
chkconfig puppetmaster on
I have a feeling I should be using some other Docker file commands to run the last 3 commands. What should that be?
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
also
If the user specifies arguments to docker run then they will override the default specified in CMD.
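A quick way to see the "only the last CMD takes effect" rule for yourself (a sketch; the tag name is arbitrary):
Dockerfile:
FROM alpine
CMD ["echo", "first"]
CMD ["echo", "second"]
Build and run:
$ docker build -t cmd-test . && docker run --rm cmd-test
second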
However, using the default process manager (e.g., SysV, systemd, etc) in Docker for most mainstream distros can cause problems (without making a lot of modifications). You generally don't need it, however -- particularly if you're only running one application (as is often considered best practice). In a Docker container, you generally want your primary application to be the first process (PID 1).
You can do this by not daemonizing puppet and start it as the default container command via something like:
CMD puppet master --verbose --no-daemonize
And use the Docker host to manage it (via restart policy, etc.).
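For example, a restart policy can be attached at run time (a sketch reusing the image and volume from the question):
docker run -d -p 9140:9140 \
  --name ca-puppet \
  --restart unless-stopped \
  -v /puppet-host/ssl:/var/lib/puppet/ssl \
  ca-puppet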
