Test a debian package installation - docker

I am writing a test case that tests installation and removal of Debian packages. The Debian package has a systemd service in it: during startup, systemd takes care of starting the service, and all the logs are forwarded to systemd.
The test cases are meant to run in the integration/end-to-end test pipeline, inside a container. From inside the container we should be able to do apt install <my_service.deb>. The issue is that containers do not have systemd in them.
A quick internet search turned up alternatives to systemd inside Docker, such as supervisor, as well as volume-mounting the cgroup filesystem and starting the container in privileged mode: --privileged -ti -v /sys/fs/cgroup:/sys/fs/cgroup:ro.
How do people usually test installation and uninstallation of a Debian package? Are there any alternatives to having systemd inside a container?
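One approach, sketched below under the assumption that the test only needs the package's maintainer scripts to succeed (not the running service), is Debian's policy-rc.d mechanism: an executable /usr/sbin/policy-rc.d that exits 101 tells invoke-rc.d not to start services, so apt install and apt remove can run in a plain container without systemd. The helper name and its root-directory parameter are illustrative, not part of any standard tooling.

```shell
# Sketch: veto invoke-rc.d service starts inside a container so that
# `apt install ./my_service.deb` succeeds without systemd. The helper
# takes a root directory so it can also be exercised outside a container.
disable_service_starts() {
    root="$1"
    mkdir -p "$root/usr/sbin"
    # Exit code 101 means "action forbidden by policy" to invoke-rc.d.
    printf '#!/bin/sh\nexit 101\n' > "$root/usr/sbin/policy-rc.d"
    chmod +x "$root/usr/sbin/policy-rc.d"
}

# Inside the container you would then run, e.g.:
#   disable_service_starts ""
#   apt-get update && apt-get install -y ./my_service.deb
#   apt-get remove -y my-service
```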

Related

Docker influxdb System has not been booted with systemd as init system (PID 1). Can't operate

I am attempting to build InfluxDB from scratch, using its RPM. In my scenario I cannot use the prebuilt Docker Hub image. I'm running into issues when I attempt to start InfluxDB inside the Docker container using the command systemctl start influxdb.
I usually get the following error:
`System has not been booted with systemd as init system (PID 1). Can't operate.`
How can I start the influxdb service in the docker container?
Here is my Dockerfile:
FROM centos
COPY influxdb-1.7.0.x86_64.rpm /home
COPY influxdb.conf /etc/influxdb/influxdb.conf
WORKDIR /home
ENV INFLUXDB_VERSION 1.7
ARG INFLUXDB_CONFIG_PATH=/etc/influxdb/influxdb.conf
VOLUME /var/lib/influxdb
#VOLUME /sys/fs/cgroup:/sys/fs/cgroup:ro
EXPOSE 8086
RUN rpm -if influxdb-1.7.0.x86_64.rpm
#RUN systemctl start influxdb
CMD [ "/bin/bash" ]
There's not really any good reason to try to use systemd inside a container, since Docker itself (or Kubernetes, or whatever is running the containers) manages the lifecycle. The official InfluxDB images, for example, basically just run influxd. See also this answer for why it's not recommended to use systemd. If you can work out your issues with the official image, you'd have much better luck; if not, you can build a similar image (see the link above).
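If the prebuilt image really can't be used, a sketch along these lines (reusing the file names from the Dockerfile in the question) runs influxd in the foreground as PID 1 instead of going through systemctl; the -config flag is the standard InfluxDB 1.x way to point the daemon at a config file:

```dockerfile
FROM centos
COPY influxdb-1.7.0.x86_64.rpm /home
COPY influxdb.conf /etc/influxdb/influxdb.conf
WORKDIR /home
RUN rpm -if influxdb-1.7.0.x86_64.rpm
VOLUME /var/lib/influxdb
EXPOSE 8086
# Run the daemon in the foreground as PID 1 -- no systemd needed.
CMD ["influxd", "-config", "/etc/influxdb/influxdb.conf"]
```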

Docker: "service" command works but "systemctl" command doesn't work

I pulled centos6 image and made a container from it. I got its bash by:
$ docker run -i -t centos:centos6 /bin/bash
On the centos6 container, I could use the "service" command without any problem. But when I pulled and used the centos7 image:
$ docker run -i -t centos:centos7 /bin/bash
Neither "service" nor "systemctl" worked. The error message is:
Failed to get D-Bus connection: Operation not permitted
My question is:
1. How are people developing without "service" and "systemctl" commands?
2. If I want to use, for example, httpd.service on the centos7 container, what should I do? Or maybe running services on a container is not recommended?
There is no process supervisor running inside either container. The service command in your CentOS 6 container works by virtue of the fact that it just runs a script from /etc/init.d, which by design ultimately launches a command in the background and returns control to you.
CentOS 7 uses systemd, and systemd is not running inside your container, so there is nothing for systemctl to talk to.
In either situation, using the service or systemctl command is generally the wrong thing to do: you want to run a single application, and you want to run it in the foreground, so that your container continues to run (from Docker's perspective, a command that goes into the background has exited, and if that was pid 1 in the container, the container will exit).
How are people developing without "service" and "systemctl" commands?
They are starting their programs directly, by consulting the necessary documentation to figure out the appropriate command line.
If I want to use, for example, httpd.service on the centos7 container, what should I do? Or maybe running services on a container is not recommended?
You would start the httpd binary using something like:
CMD ["httpd", "-DFOREGROUND"]
If you'd like to stick with the service/systemctl commands to start/stop services, you can do that in a CentOS 7 container by using the docker-systemctl-replacement script.
I had some deployment scripts that were using the service start/stop commands on a real machine, and they work fine with a container, without any further modification. When you put the systemctl.py script into the CMD, it will simply start all enabled services, somewhat like the init process on a real machine.
systemd is included but not enabled by default in the CentOS 7 Docker image. It is mentioned on the repository page, along with steps to enable it.
https://hub.docker.com/_/centos/
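A minimal sketch of the docker-systemctl-replacement approach, assuming you have copied systemctl.py out of that project's repository into your build context (the httpd service is just an example):

```dockerfile
FROM centos:7
RUN yum install -y httpd
# Drop the replacement script in over the real systemctl.
COPY systemctl.py /usr/bin/systemctl
RUN systemctl enable httpd
EXPOSE 80
# Acts a bit like init: starts all enabled services, then keeps running.
CMD ["/usr/bin/systemctl"]
```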

Avoid docker exec zombie processes when connecting to containers via bash

Like most docker users, I periodically need to connect to a running container and execute various arbitrary commands via bash.
I'm using 17.06-CE with an ubuntu 16.04 image, and as far as I understand, the only way to do this without installing SSH into the container is via docker exec -it <container_name> bash.
However, as is well documented, each bash shell you spawn this way leaves a zombie process behind when your connection is interrupted. If you connect to your container often, you end up with thousands of idle shells, a most undesirable outcome!
How can I ensure these zombie shell processes are killed upon disconnection, as they would be over SSH?
One way is to make sure an init process runs in your container.
In recent versions of Docker there is an --init option to docker run that does exactly this. It uses tini as init; tini can also be used directly in older versions.
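As a sketch, the flag is simply passed at docker run time; the tiny wrapper below (the function name is made up for illustration) shows where it goes:

```shell
# Illustrative wrapper: --init asks Docker to run tini as PID 1, which
# reaps orphaned children such as abandoned `docker exec` shells.
run_with_init() {
    docker run --init "$@"
}

# e.g.: run_with_init -it --rm ubuntu:16.04 bash
```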
Another option is something like the phusion-baseimage project that provides a base docker image with this capability and many others (might be overkill).

Is it possible to restart docker container from inside it

I'd like to package Selenium Grid Extras into a Docker image.
When run outside a container, this service can reboot the OS it's running on. I wonder if I can set up the container to be restarted by the Selenium Grid Extras service running inside the container.
I am not familiar with Selenium Grid, but as a general idea: you could mount a folder from the host as a data volume, then let Selenium write information there, like a flag file.
On the host, a scheduled task / cron job would check for this flag in the shared folder and, if it has a certain status, invoke a docker restart from there.
Not sure if there are other, more elegant solutions for this, but this is what came to my mind ad hoc.
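A sketch of that host-side watcher; the flag path, container name, and function name are all illustrative, and the function would be invoked from cron (say, every minute):

```shell
# Host-side watcher: restart a container when a flag file appears in a
# volume shared with it. Paths and names below are made up for illustration.
FLAG="${FLAG:-/srv/selenium-shared/restart.flag}"
CONTAINER="${CONTAINER:-selenium-grid}"

check_and_restart() {
    if [ -f "$FLAG" ]; then
        rm -f "$FLAG"                # consume the flag first, avoiding restart loops
        docker restart "$CONTAINER"  # restart the container from the host side
        return 0
    fi
    return 1
}
```

Inside the container, the service would then just touch the flag file on the mounted volume instead of trying to reboot anything itself.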
Update:
I just found this on the Docker forum:
https://forums.docker.com/t/how-can-i-run-docker-command-inside-a-docker-container/337
I'm not sure about CoreOS, but normally you can manage your host containers from within a container by mounting the Docker socket. Such as:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock ubuntu:latest sh -c "apt-get update; apt-get install docker.io -y; bash"
or
https://registry.hub.docker.com/u/abh1nav/dockerui/

Why there is no init / initctl on the docker centos image

Using the public/common Docker centos image, I was installing some services that required an /etc/init directory, and I got a failure. I further noticed that initctl does not exist, meaning that init was not run.
How can the centos image be used with a fully functional init process?
example:
docker run -t -i centos /bin/bash
file /etc/init
/etc/init: cannot open ... no such file or directory ( /etc/init )
initctl
bash: initctl: command not found
A Docker container is more analogous to a process than a VM. That process can spawn other processes though, and the sub-processes will run in the same container. A common pattern is to use a process supervisor like supervisord as described in the Docker documentation. In general though, it's usually recommended to try and run one process per container if you can (so that, for example, you can monitor and cap memory and CPU at the process level).
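For reference, the supervisord pattern from the Docker documentation boils down to a config along these lines (the program and its path are illustrative), with supervisord itself run as the container's CMD:

```ini
; supervisord.conf -- sketch; program names and paths are illustrative
[supervisord]
nodaemon=true          ; keep supervisord in the foreground as PID 1

[program:httpd]
command=/usr/sbin/httpd -DFOREGROUND
autorestart=true
```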