Docker - Process not starting at boot

I am containerizing the latest version of grafana and want to start the grafana-process when the container starts and then use it in my K8S (kubernetes) cluster.
My Dockerfile looks like:
FROM armdocker/baseimages/rhel:7-20161207
MAINTAINER xxxxxxxx
ENV GRAFANA_VERSION_MAJOR=4 GRAFANA_VERSION_MINOR=4 GRAFANA_VERSION_PATCH=3-1
ENV GRAFANA_VERSION=${GRAFANA_VERSION_MAJOR}.${GRAFANA_VERSION_MINOR}.${GRAFANA_VERSION_PATCH}
RUN yum clean all && yum install -y unzip tar
RUN curl -f -L -o grafana-${GRAFANA_VERSION}.x86_64.rpm https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana-${GRAFANA_VERSION}.x86_64.rpm && \
yum localinstall grafana-${GRAFANA_VERSION}.x86_64.rpm -y
EXPOSE 3000
ENTRYPOINT ["/etc/init.d/grafana-server start"]
Building the image succeeds and returns no errors.
When I try to run the image, I get an error:
docker run -dit -p 3000:3000 armdocker/proj/grafana:1.0.5
471b2acb964caad69bbb78831a59ee9d2b27997911b5b104b0057ddc957d1101
Error response from daemon: Cannot start container 471b2acb964caad69bbb78831a59ee9d2b27997911b5b104b0057ddc957d1101: [8] System error: exec: "/etc/init.d/grafana-server start": stat /etc/init.d/grafana-server start: no such file or directory
This seems very weird, since I am installing the RPM first (which creates the file /etc/init.d/grafana-server) and then trying to start the process as my ENTRYPOINT.
I then tried
CMD ["/etc/init.d/grafana-server start"]
This also results in the same error: /etc/init.d/grafana-server start: no such file or directory
I then tried using the systemctl command:
docker run -dit -p 3000:3000 armdocker/proj/grafana:1.0.6
bfd492c75a0f4c284fc0fdbd5a590f0155f6f67bcb4834e144f344bb789546f3
Error response from daemon: Cannot start container bfd492c75a0f4c284fc0fdbd5a590f0155f6f67bcb4834e144f344bb789546f3: [8] System error: exec: "/bin/systemctl start grafana-server.service": stat /bin/systemctl start grafana-server.service: no such file or directory
I am out of ideas as to what I am doing wrong in getting a container with a running Grafana process.

Unless you're running your own systemd daemon inside the container (I don't recommend this; it creates lots of issues), you shouldn't be trying to start the process with a systemctl or /etc/init.d command. Containers are not VMs; they are a way to run an application within its own namespace, and when that application exits, so does your container. When your application is something like a systemctl start command, your container will exit the moment that systemctl command returns, which isn't useful if you were hoping it would stay up for as long as the grafana process runs.
Rather than trying to reinvent the wheel, I'd recommend you look at how Grafana themselves package their Docker container. Specifically, their run.sh ends with:
exec gosu grafana /usr/sbin/grafana-server \
--homepath=/usr/share/grafana \
--config=/etc/grafana/grafana.ini \
cfg:default.log.mode="console" \
cfg:default.paths.data="$GF_PATHS_DATA" \
cfg:default.paths.logs="$GF_PATHS_LOGS" \
cfg:default.paths.plugins="$GF_PATHS_PLUGINS" \
"$#"
Their repo is over at https://github.com/grafana/grafana-docker
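Applying the same idea to the Dockerfile in the question, here is a minimal sketch that runs the server itself in the foreground as the ENTRYPOINT. It assumes the RPM installs the binary at /usr/sbin/grafana-server with its homepath at /usr/share/grafana and its config at /etc/grafana/grafana.ini, the same paths the run.sh above uses:
FROM armdocker/baseimages/rhel:7-20161207
ENV GRAFANA_VERSION=4.4.3-1
RUN yum clean all && yum install -y unzip tar
RUN curl -f -L -o grafana-${GRAFANA_VERSION}.x86_64.rpm https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana-${GRAFANA_VERSION}.x86_64.rpm && \
    yum localinstall -y grafana-${GRAFANA_VERSION}.x86_64.rpm
EXPOSE 3000
# Run the server binary directly in the foreground instead of calling the init script
ENTRYPOINT ["/usr/sbin/grafana-server", "--homepath=/usr/share/grafana", "--config=/etc/grafana/grafana.ini"]
With this, grafana-server is the container's main process, and the container stays up exactly as long as Grafana does.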

As an alternative you could use the docker-systemctl-replacement script and register it as the main CMD of the image. It reads the *.service files to know how to start and stop a service (without the help of a systemd daemon), so if the Grafana maintainers change their startup scenario, your builds will continue to work. ;)
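A rough sketch of that approach (the file name and paths here are assumptions; check the project's README at https://github.com/gdraheim/docker-systemctl-replacement for the exact usage) would copy the script over the systemctl binary and make it the image's main command:
# systemctl3.py is the replacement script vendored into the build context (assumed name)
COPY systemctl3.py /usr/bin/systemctl
RUN chmod +x /usr/bin/systemctl
# Running the replacement as the main command lets it start the enabled *.service units
CMD ["/usr/bin/systemctl"]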

Related

"systemctl" command doesn't work on centos with docker

I use Docker with CentOS 8.
How can I use the systemctl command in a Dockerfile?
The app I am installing needs systemctl.
I have an error:
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
I build the image like this:
docker build -t myapp:11 .
I get the same error when I try it in the container:
docker run -it --privileged app:11 /bin/bash
Thank you.
docker build -t nuance:11 .
docker run -it --cap-add=NET_ADMIN nuance:11 /bin/bash
# syntax=docker/dockerfile:1
FROM centos:latest
USER root
RUN cd /etc/yum.repos.d/
RUN sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-*
RUN sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*
RUN yum -y update && \
yum clean all
RUN yum -y install \
java-11-openjdk-devel \
perl-Data-Dumper \
redhat-lsb-core.x86_64 \
glibc.x86_64 \
glibc.i686 \
libstdc++.x86_64 \
libstdc++.i686 \
openssl \
libgcc \
libgcc.i686 \
libaio.x86_64 \
libaio.i686 \
libnsl.i686 \
ncurses-libs \
httpd.x86_64 \
unzip \
-x postfix \
-x mariadb-libs \
zlib.i686 \
zlib.x86_64
WORKDIR /tmp
COPY Nuance_Speech_Suite-11.0.10-x86_64-linux.tgz ./Nuance_Speech_Suite-11.0.10-x86_64-linux.tgz
COPY NRec-fr-FR-10.0.0-10.1.0.i686-linux.tar.gz ./languages/NRec-fr-FR-10.0.0-10.1.0.i686-linux.tar.gz
COPY NVE_fr_FR_audrey-ml_xpremium-2.1.0_linux.zip ./languages/NVE_fr_FR_audrey-ml_xpremium-2.1.0_linux.zip
COPY NRec-fr-FR-10.0.0-10.1.0-CumulativePatch-1_linux.zip ./languages/NRec-fr-FR-10.0.0-10.1.0-CumulativePatch-1_linux.zip
COPY NRec-fr-FR-10.0.0-10.1.0-CumulativePatch-2_linux.zip ./languages/NRec-fr-FR-10.0.0-10.1.0-CumulativePatch-2_linux.zip
COPY nuance.lic ./nuance.lic
RUN tar -zxf Nuance_Speech_Suite-11.0.10-x86_64-linux.tgz
RUN tar -zxf languages/NRec-fr-FR-10.0.0-10.1.0.i686-linux.tar.gz
RUN unzip languages/NVE_fr_FR_audrey-ml_xpremium-2.1.0_linux.zip
RUN unzip languages/NRec-fr-FR-10.0.0-10.1.0-CumulativePatch-1_linux.zip
RUN unzip languages/NRec-fr-FR-10.0.0-10.1.0-CumulativePatch-2_linux.zip
WORKDIR /tmp/Nuance_Speech_Suite-11.0.10
RUN ./setup.sh -s -f "/tmp/nuance.lic" -j "/usr/lib/jvm/java-11-openjdk" -V "/tmp/languages" -I "NLM,NSS"
Last lines of the log:
2022-12-16 09:22:11 setup.sh: info: Restarting the Nuance License Manager service
2022-12-16 09:22:11 setup.sh: info: starting command 'systemctl restart nuance-licmgr'; output sent to log
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
2022-12-16 09:22:11 setup.sh: info: Command 'systemctl restart nuance-licmgr' returned 1
2022-12-16 09:22:11 setup.sh: error: install_postprocessing_nlm_startservices() failed to start services
2022-12-16 09:22:11 setup.sh: info: skipping invocation of install_postprocessing_nms() due to previous post processing errors
2022-12-16 09:22:11 setup.sh: info: Skipping install_execute_installsuite due to previous errors
You can't run systemctl in a Dockerfile at all. More broadly, commands like systemctl or service don't work well in Docker, and you should restructure your container to avoid them.
For systemctl more specifically, it tries to connect to the systemd daemon. In a Dockerfile, each RUN step occurs in a new container, and like other containers, that container only runs the one RUN command; it does not run systemd or any other typical Linux daemons. Furthermore, at the end of the RUN line, the filesystem is persisted but any other changes are lost, so even if you systemctl start something successfully, the image won't contain a running process.
More generally I'd recommend avoiding systemd in Docker. A minimal init system like tini can be a good idea for some problems like reaping zombie processes; if you must run multiple processes in one container and really can't refactor it then supervisord can fill this need. A typical systemd installation will want to configure kernel parameters, start terminal logins, mount filesystems, and configure the network, all of which are basically impossible in Docker; it will capture the main process's stdout so docker logs doesn't work.
Aim for your container to only have one process. Don't run an init system at all if you don't need to. Don't try to "start a service", just run the program you're trying to build in the foreground as the one thing the container does.
FROM some-base-image
RUN a command to install the software
CMD the_program
# with no `systemctl` anywhere
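As a concrete, hedged example built around the httpd package the question's Dockerfile already installs, the web server can be kept in the foreground with its own flag instead of a systemctl call (the Nuance services would need the equivalent foreground treatment; reuse the vault.centos.org repo fix from the question if the default mirrors are unavailable):
FROM centos:latest
RUN yum -y install httpd && yum clean all
EXPOSE 80
# -DFOREGROUND keeps Apache attached as the container's main process; no systemctl anywhere
CMD ["httpd", "-DFOREGROUND"]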
If you are facing the following error when running docker:
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'
or
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
then run the following command:
$ sudo systemctl status docker
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
The reason is that you are trying to use a systemd command to manage services on Linux, but your system doesn't use systemd and is (most likely) using the classic SysV init (sysvinit) system.
Run the following command to confirm that this is the case:
$ ps -p 1 -o comm=
init
Now check the status again using:
$ sudo service docker status
* Docker is not running
You can start Docker using the following command:
sudo service docker start
* Starting Docker: docker
For more detail, refer to the following link:
https://linuxhandbook.com/system-has-not-been-booted-with-systemd/
Systemd command                  Sysvinit command
systemctl start service_name     service service_name start
systemctl stop service_name      service service_name stop
systemctl restart service_name   service service_name restart
systemctl status service_name    service service_name status
systemctl enable service_name    chkconfig service_name on
systemctl disable service_name   chkconfig service_name off

Run a command after the docker container has started running

I'm trying to run sendmailconfig after my PHP-FPM (7.1-fpm) container has started, but I'm having a hard time doing so without getting in the way of the FPM part of the container.
FROM php:7.1-fpm
RUN apt-get update && apt-get install
CMD "/usr/local/bin/config.sh" && /bin/bash
I've tried making a script that purely executes yes | sendmailconfig, but it seems to stop the image's default script from running, which causes PHP-FPM to never actually run.
The reason I want this done in the image is that I have to run the sendmailconfig command every time I restart the container, which is impractical when managing multiple Docker stacks.
Set your entrypoint to run a file you've copied in; that file should have something like the following in it:
/usr/local/bin/config.sh
# If this isn't the correct command for you to start php-fpm, look up the correct one for your image
sudo service php7.1-fpm start
# Execute the CMD passed in from the Dockerfile
sudo -H bash -c "$@;"
# You'll probably be ok with just `bash -c "$@;"` if you don't have sudo installed
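Here is a fuller sketch for the php:7.1-fpm case that keeps php-fpm as the foreground process. It assumes the stock image's entrypoint is docker-php-entrypoint and its default command is php-fpm (check docker history or the image's Dockerfile if unsure), and that installing the sendmail package provides sendmailconfig.
entrypoint.sh:
#!/bin/sh
set -e
# Run the one-time sendmail setup on every container start
yes | sendmailconfig
# Hand off to the image's normal entrypoint so php-fpm runs in the foreground as the main process
exec docker-php-entrypoint "$@"
Dockerfile:
FROM php:7.1-fpm
RUN apt-get update && apt-get install -y sendmail && rm -rf /var/lib/apt/lists/*
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["php-fpm"]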

Docker container is not running even if -d

I'm French and new here (so I don't yet know how Stack Overflow and its community work); I'll try to adapt.
So, my first problem is the following:
I run a Docker container from an image I created with a Dockerfile (it is a DNS container).
In the Dockerfile, this container has to start script.sh when it starts.
But after running:
docker run -d -ti -p 53:53 alex/dns
(I use -p 53:53 because it is DNS.)
I can see my DNS running at the end of my script.sh, but when I do docker ps -a, the container is not running.
I'm new to Docker; I started learning it two days ago.
I tried to add (one by one, of course):
CMD ["bash"]
CMD ["/bin/bash"]
to run bash and make sure the container does not power off.
I tried to add -d to the docker run command.
I tried to use:
docker commit ti alex/dns
and
docker exec -ti alex/dns /bin/bash
My dockerfile file :
FROM debian
...
RUN apt-get install bind9
...
ADD script.sh /usr/bin/script.sh
...
ENTRYPOINT ["/bin/bash", "script.sh]
CMD ["/bin/bash"]
My file script.sh :
service bind9 stop
# (copies and replaces the config files for bind9)
service bind9 restart
I hope that there are not too many mistakes and that I managed to make myself understood.
I expect the DNS container to stay running so I can use it with docker exec.
But right now, after docker run, the container starts and then stops just after my script finishes. Yes, the DNS server is running; before closing, the container tells me something like [ok] Bind9 running. But after that, the container stops.
I suspect the problem you're facing is that your container will terminate once service bind9 restart completes.
You need to have a foreground process running to keep the container running.
I'm unfamiliar with bind9 but I recommend you explore ways to run bind9 in the foreground in your container.
Your command to run the container is correct:
docker run -d -ti -p 53:53 alex/dns
You may need to:
RUN apt-get update && apt-get -y install bind9
You will likely need something like (don't know):
ENTRYPOINT ["/bind9"]
Googled it ;-)
https://manpages.debian.org/jessie/bind9/named.8.en.html
After you've configured it, you can run it as a foreground process:
ENTRYPOINT ["named","-g"]

Docker Startup Multiple service is not working

Dockerfile
FROM drupal
RUN apt-get update
RUN apt-get install openssh-server -y
RUN apt-get install -y supervisor
#SS Related Fix : https://github.com/Microsoft/WSL/issues/3621
RUN mkdir -p /run/sshd
# SS Access Configuration
RUN echo "root:Docker!" | chpasswd
# Project upload
RUN rm -rf /var/www/html/*
COPY ./html/ /var/www/html/
# Startup Configuration
COPY servername.conf /etc/apache2/conf-enabled/servername.conf
ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]
Start command: docker -D run -p 80:80 -p 2222:22 -it /bin/bash
supervisord.conf:
[supervisord]
nodaemon=true
[program:SSH]
command=/usr/sbin/sshd start
[program:Apache]
command=/etc/init.d/apache2 start
When I jump into a shell and run that command it works, but when I start the container it does not start up the web server.
As the documentation states:
To start supervisord, run $BINDIR/supervisord. The resulting process
will daemonize itself and detach from the terminal. It keeps an
operations log at $CWD/supervisor.log by default.
You may start the supervisord executable in the foreground by passing
the -n flag on its command line. This is useful to debug startup
problems.
So supervisord detaches from the main process, which for Docker means the main process has ended and the container exits. To solve your problem you need to change the CMD section to:
CMD ["/usr/bin/supervisord", "-n"]
When you run
docker -D run -p 80:80 -p 2222:22 -it /bin/bash
The last part of the command, /bin/bash, replaces the CMD in the Dockerfile, so you only get the GNU bash shell. You should remove that part of the line and the standard command from your image will run.
You might consider how much you actually need an interactive shell in your Docker environment. Most application images are set up to run totally on their own without manual setup steps; compare the stock mysql or nginx images, for instance, which don't include any kind of remote login system. Also consider that anyone who can run docker history can now trivially find out your root password, and you have no way to manage the sshd host keys. I'd suggest removing this entire supervisord/sshd system and just packaging your application.
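If you follow that advice and drop sshd and supervisord entirely, the Dockerfile shrinks to something like the sketch below; it assumes the drupal base image's default CMD already runs Apache in the foreground (the official image uses apache2-foreground for this):
FROM drupal
# Project upload
RUN rm -rf /var/www/html/*
COPY ./html/ /var/www/html/
COPY servername.conf /etc/apache2/conf-enabled/servername.conf
# No CMD needed: the base image's CMD keeps Apache as the container's main process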

What should I do if exposing ports in a Dockerfile does not take effect?

I have the following Dockerfile to run an Nginx server, but I can't seem to get Docker to expose port 80 through my host machine so I can access it externally:
FROM ubuntu:latest
EXPOSE 80
RUN apt-get update
RUN apt-get -y install apt-utils
RUN apt-get -y dist-upgrade
RUN apt-get -y install nginx
CMD service nginx start
If I run the following command after building the image, docker run -p 80:80 -d nginxserver, the port mapping takes effect; however, my newly created Docker container does not keep running and exits after a second or so.
If I try docker run -it /bin/bash -d nginxserver, this will allow my Docker container to keep running, but I won't be able to connect to the Nginx server from outside the host machine.
If I try docker run -p 80:80 -it /bin/bash -d nginxserver, this fails with the following error message:
docker: Error response from daemon: OCI runtime create failed:
container_linux.go:348: starting container process caused "exec:
\"-it\": executable file not found in $PATH": unknown.
What would be the correct solution here?
The best solution is just to use the standard nginx image, if you're not really going to customize the image at all.
If you're writing a custom image, you should broadly assume commands like service just don't work. The CMD of the image you show (assuming it's successful) attempts to launch nginx as a background service; once it's started in the background, the container's main process has finished and the container exits. The CMD generally needs to launch the single process that the container runs in the foreground.
In terms of your various docker run gyrations, the options always come in the same order:
docker run \
-d -p 80:80 \ # docker-specific options
nginxserver \ # the image name
nginx -g 'daemon off;' # the command to run and its options
If you specify an alternate command (like /bin/bash) that runs instead of the main container process, and if the container normally would have run a network server, you get the shell instead. /bin/bash is a command and not an argument to -it; the same breakdown would be
docker run \
--rm -i -t \ # docker-specific options
nginxserver \ # the image name
/bin/bash # the command to run and its options
Note that you don't normally need --privileged to listen on ports under 1024 inside the container; the container's main process runs as root by default and can bind them.
Also, service nginx start exits immediately (this is covered by David Maze's answer).
You should instead use something like CMD ["nginx", "-g", "daemon off;"] so nginx runs in the foreground.
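Putting the advice from both answers together, a minimal sketch of the question's Dockerfile with nginx kept in the foreground would be:
FROM ubuntu:latest
RUN apt-get update && apt-get -y install nginx && rm -rf /var/lib/apt/lists/*
EXPOSE 80
# "daemon off;" keeps nginx in the foreground as the container's main process
CMD ["nginx", "-g", "daemon off;"]
After building it, docker run -p 80:80 -d nginxserver keeps the container running and publishes port 80 on the host.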
