Using the common public Docker CentOS image, I was installing some services that required a /etc/init directory, and the installation failed. I further noticed that initctl does not exist, meaning that init was not run.
How can the CentOS image be used with a fully functional init process?
example:
docker run -t -i centos /bin/bash
file /etc/init
/etc/init: cannot open ... no such file or directory ( /etc/init )
initctl
bash: initctl: command not found
A Docker container is more analogous to a process than a VM. That process can spawn other processes though, and the sub-processes will run in the same container. A common pattern is to use a process supervisor like supervisord as described in the Docker documentation. In general though, it's usually recommended to try and run one process per container if you can (so that, for example, you can monitor and cap memory and CPU at the process level).
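For illustration, a minimal sketch of that supervisord pattern, assuming a Debian/Ubuntu base image and two example services (sshd and apache2, neither of which comes from the question):
# Dockerfile sketch: supervisord as the container's main process
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y supervisor openssh-server apache2 && \
    mkdir -p /var/run/sshd
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# -n keeps supervisord in the foreground so the container stays up
CMD ["/usr/bin/supervisord", "-n"]

# supervisord.conf sketch: each managed program runs in the foreground
[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D

[program:apache2]
command=/usr/sbin/apache2ctl -DFOREGROUND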
Related
I am writing a test case that tests installation and removal of a Debian package. The package ships a systemd service unit; during startup, systemd takes care of starting the service, and all the logs are forwarded to systemd.
The test cases are meant to run in the integration/end-to-end test pipeline, and the test itself is meant to run inside a container. From inside the container we should be able to do apt install <my_service.deb>. But the issue is that containers do not have systemd in them.
A quick internet search turned up alternatives to systemd inside Docker, such as supervisor, and another approach of volume mounting the cgroup filesystem and starting the container in privileged mode: --privileged -ti -v /sys/fs/cgroup:/sys/fs/cgroup:ro.
How do people usually test installation and uninstallation of a Debian package? Are there any alternatives to having systemd inside a container?
I am attempting to build InfluxDB from scratch, using its RPM. In my scenario I cannot use the prebuilt Docker Hub image. I'm running into issues when I attempt to start InfluxDB inside the Docker container using the command systemctl start influxdb.
I usually get the following error:
`System has not been booted with systemd as init system (PID 1). Can't operate.`
How can I start the influxdb service in the docker container?
Here is my Dockerfile:
FROM centos
COPY influxdb-1.7.0.x86_64.rpm /home
COPY influxdb.conf /etc/influxdb/influxdb.conf
WORKDIR /home
ENV INFLUXDB_VERSION 1.7
ARG INFLUXDB_CONFIG_PATH=/etc/influxdb/influxdb.conf
VOLUME /var/lib/influxdb
#VOLUME /sys/fs/cgroup:/sys/fs/cgroup:ro
EXPOSE 8086
RUN rpm -if influxdb-1.7.0.x86_64.rpm
#RUN systemctl start influxdb
CMD [ "/bin/bash" ]
There's not really any good reason to try to use systemd inside a container, since Docker itself (or Kubernetes, or whatever's running the containers) is managing the lifecycle. The official InfluxDB images, for example, basically just run influxd. See also this answer for why it's not recommended to use systemd. If you can work out your issues with the official image, you'd have much better luck; but if not, you can build a similar image (see the link above).
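If the prebuilt image really is off the table, a minimal sketch of the RPM-based Dockerfile, assuming the RPM installs an influxd binary on the PATH, would run the daemon in the foreground instead of calling systemctl:
FROM centos
COPY influxdb-1.7.0.x86_64.rpm /home
COPY influxdb.conf /etc/influxdb/influxdb.conf
WORKDIR /home
RUN rpm -if influxdb-1.7.0.x86_64.rpm
VOLUME /var/lib/influxdb
EXPOSE 8086
# Run the daemon itself as PID 1; no init system required
CMD ["influxd", "-config", "/etc/influxdb/influxdb.conf"]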
I pulled the centos6 image and made a container from it. I got a bash shell in it with:
$ docker run -i -t centos:centos6 /bin/bash
In the centos6 container, I could use the "service" command without any problem. But when I pulled and used the centos7 image:
$ docker run -i -t centos:centos7 /bin/bash
Both of "service" and "systemctl" didn't work. The error message is:
Failed to get D-Bus connection: Operation not permitted
My questions are:
1. How are people developing without "service" and "systemctl" commands?
2. If I want to use, for example, httpd.service on the centos7 container, what should I do? Or maybe running services on a container is not recommended?
There is no process supervisor running inside either container. The service command in your CentOS 6 container works simply because it runs a script from /etc/init.d, which by design ultimately launches a command in the background and returns control to you.
CentOS 7 uses systemd, and systemd is not running inside your container, so there is nothing for systemctl to talk to.
In either situation, using the service or systemctl command is generally the wrong thing to do: you want to run a single application, and you want to run it in the foreground, so that your container continues to run (from Docker's perspective, a command that goes into the background has exited, and if that was pid 1 in the container, the container will exit).
How are people developing without "service" and "systemctl" commands?
They are starting their programs directly, by consulting the necessary documentation to figure out the appropriate command line.
If I want to use, for example, httpd.service on the centos7 container, what should I do? Or maybe running services on a container is not recommended?
You would start the httpd binary using something like:
CMD ["httpd", "-DFOREGROUND"]
If you'd like to stick with the service/systemctl commands to start and stop services, you can do that in a CentOS 7 container by using the docker-systemctl-replacement script.
I had some deployment scripts that used the service start/stop commands on a real machine, and they work fine with a container, without any further modification. When the systemctl.py script is put into the CMD, it will simply start all enabled services, somewhat like the init process on a real machine.
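As a rough sketch of that setup, assuming you have downloaded the project's systemctl.py script next to the Dockerfile (see the project's README for the exact details):
FROM centos:7
RUN yum -y install httpd && yum clean all
# Replace the real systemctl with the helper script
COPY systemctl.py /usr/bin/systemctl
RUN systemctl enable httpd
EXPOSE 80
# As PID 1 the script starts every enabled service, much like init would
CMD ["/usr/bin/systemctl"]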
systemd is included but not enabled by default in the CentOS 7 Docker image. This is mentioned on the repository page, along with the steps to enable it.
https://hub.docker.com/_/centos/
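In outline, the approach documented there builds a systemd-enabled image and runs it with the host's cgroup filesystem mounted read-only. The sketch below is simplified (the linked page also removes a number of unwanted unit files and is the authoritative version):
# Dockerfile sketch for a systemd-enabled CentOS 7 image
FROM centos:7
ENV container docker
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]

# Run it with the cgroup hierarchy mounted read-only:
# docker run -ti -v /sys/fs/cgroup:/sys/fs/cgroup:ro <image>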
Like most docker users, I periodically need to connect to a running container and execute various arbitrary commands via bash.
I'm using 17.06-CE with an ubuntu 16.04 image, and as far as I understand, the only way to do this without installing ssh into the container is via docker exec -it <container_name> bash
However, as is well documented, each bash shell process you spawn leaves a zombie process behind when your connection is interrupted. If you connect to your container often, you end up with thousands of idle shells, a most undesirable outcome!
How can I ensure these zombie shell processes are killed upon disconnection, as they would be over ssh?
One way is to make sure an init process runs as PID 1 in your container.
In recent versions of Docker there is an --init option to docker run that does this. It uses tini as the init process; tini can also be used directly in older versions.
Another option is something like the phusion-baseimage project that provides a base docker image with this capability and many others (might be overkill).
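For example, the flag can be passed straight to docker run, or tini can be baked into an image as its entrypoint, roughly as tini's README describes (the version tag below is just an example):
# Let Docker inject an init process (tini) as PID 1
docker run --init -it ubuntu:16.04 bash

# Or build tini into the image yourself
FROM ubuntu:16.04
ADD https://github.com/krallin/tini/releases/download/v0.19.0/tini /tini
RUN chmod +x /tini
# tini reaps zombie processes and forwards signals to its child
ENTRYPOINT ["/tini", "--"]
CMD ["bash"]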
I've deployed a container based on ubuntu:16.04
docker run -ti ubuntu:latest /bin/bash
Inside the container's bash shell, I installed nsnake, a game, from apt:
apt install nsnake
and I do not have this game on my host.
Now I want to know where nsnake's binaries are on the host machine.
On the host machine:
ps -e | grep nsnake
and then, taking the PID:
file /proc/PID/exe
but instead of returning the file pointed to by /proc/PID/exe, this last command gives me:
/proc/PID/exe: broken symbolic link to /usr/games/nsnake
So, the important question is:
is there a method to find the location of the nsnake binaries?
Other interesting questions are:
why is the symlink "broken"?
if there is no reference to the original binary inside /proc/PID/exe, how does the system know what code it has to run?
Q. Why is the symlink "broken"?
You are mixing up the host's PID namespace with the container's PID namespace. The link looks broken on your host, but it is not broken from the container's point of view.
PID namespace provides separation of processes. The PID Namespace removes the view of the system processes, and allows process ids to be reused including pid 1.
https://docs.docker.com/engine/reference/run/
Do the same thing that you are doing on your host, but inside the container. You will see that the PID (process id) of nsnake is a different number. Inside your container the symbolic link is not broken:
# docker exec -it <container-id> file /proc/231/exe
/proc/231/exe: symbolic link to /usr/games/nsnake
(you will need to install the file utility inside the container with apt-get install file, or just do ls -l /proc/PID/exe)
Docs:
https://en.wikipedia.org/wiki/Linux_namespaces#Process_ID_.28pid.29
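One way to see the two views side by side (the container name mycontainer is hypothetical, and ps may need procps installed in minimal images):
# On the host: docker top reports the host-side PIDs of the container's processes
docker top mycontainer
# Inside the container: the same processes appear with container-side PIDs
docker exec -it mycontainer ps -ef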
Q. If there is no reference to the original binary inside /proc/PID/exe, how does the system know what code it has to run?
The containerized process (in your example /bin/bash) sees its own filesystem, which Docker mounts for you:
# Inside the container
root#d0fb6fdea3b5:/# mount
overlay on / type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay2/l/4FPUTTI4XND27BPHH7FS4JKJ4V:/var/lib/docker/overlay2/l/U65SX2N4JGA5X6TXGRJQERQWNX:/var/lib/docker/overlay2/l/OEX7NG4TZRGXBBFSSQ7Q3FXC5R:/var/lib/docker/overlay2/l/FXRLO27CABA4ZFNOFTOL2HFHP4:/var/lib/docker/overlay2/l/KBEK646A7PRLHLWM6CVJRMXSEH:/var/lib/docker/overlay2/l/PSRBIMSE36LW2MZEOSMM3XDG2Y,upperdir=/var/lib/docker/overlay2/5b867408de3a3915bc5f257aecaf73193083b3c8cc84c5d642810a3eaaeef550/diff,workdir=/var/lib/docker/overlay2/5b867408de3a3915bc5f257aecaf73193083b3c8cc84c5d642810a3eaaeef550/work)
...
In this case the storage driver is "overlay2". When /bin/bash makes the system calls to fork and execute /usr/games/nsnake, the Linux kernel resolves that path in the filesystem that the container process can see, as expected.
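That also gives a way to answer the practical question of where the binary sits on the host: it lives inside the container's layers under /var/lib/docker. A sketch, assuming the overlay2 driver shown above and a hypothetical container name:
# Writable layer of the container (where the apt install landed)
sudo ls "$(docker inspect -f '{{ .GraphDriver.Data.UpperDir }}' mycontainer)/usr/games/"
# Merged view of all layers, available while the container is running
sudo ls "$(docker inspect -f '{{ .GraphDriver.Data.MergedDir }}' mycontainer)/usr/games/"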