Docker container exits immediately upon invocation

I am using Docker version 18.09.0. The image builds without errors, but when I create a container from it, the container runs and exits immediately with exit status 0, even though I use the -it option. Here is the Dockerfile:
FROM node:8.15-alpine
WORKDIR /usr/src/app
COPY package*.json ./
COPY middleware middleware
COPY hfc-key-store hfc-key-store
COPY app.js ./
RUN apk --no-cache --virtual build-dependencies add \
python \
make \
g++ \
&& npm install \
&& npm install -g forever
ENTRYPOINT ["forever", "start", "-l", "/logsBackEnd.txt", "--spinSleepTime", "10000", "app.js"]
Command to build the image:
docker image build -t nid-api:1.0 .
Command to run the container:
docker run -it nid-api:1.0

You need to run in detached mode using -d.
There are two reasons I can think of for a container to exit:
There is no service running inside the container.
The service is running, but the container was started without the detach option.
The first case seems more related to your error, but in general it is worth running containers in detached mode for services like this.
Also see this related question:
Docker container will automatically stop after "docker run -d"
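To see why the container exited, it can help to inspect it after the fact; a quick diagnostic along these lines (the container name is illustrative):
docker run -d --name nid-api nid-api:1.0
docker ps -a --filter name=nid-api   # shows the exit status
docker logs nid-api                  # shows anything the process printed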

forever start runs forever as a daemon inside the Docker container, and that may be what causes the container to exit immediately: the daemon detaches, so the container's main process finishes.
You can try using dumb-init to start any process running in a Docker container so that exit signals are handled correctly.
dumb-init enables you to simply prefix your command with dumb-init. It runs as PID 1, acting like a simple init system: it immediately spawns your command as a child process and proxies all received signals to the session rooted at that child.
Since your actual process is no longer PID 1, when it receives signals from dumb-init, the default signal handlers apply, and your process behaves as you would expect. If your process dies, dumb-init will also die, taking care to clean up any other processes that might still remain.
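A minimal sketch of how the end of the Dockerfile could change, assuming the dumb-init package is available in this image's Alpine repositories; note that forever without the start subcommand stays in the foreground instead of daemonizing:
RUN apk add --no-cache dumb-init
ENTRYPOINT ["dumb-init", "--"]
CMD ["forever", "-l", "/logsBackEnd.txt", "--spinSleepTime", "10000", "app.js"]
Even simpler, CMD ["node", "app.js"] would keep the container alive, since restarting on failure can be delegated to Docker itself (docker run --restart unless-stopped).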

Related

Testing an application that needs MySQL/MariaDB in Jenkins

This is likely a standard task, but I've spent a lot of time googling and prototyping this without success.
I want to set up CI for a Java application that needs a database (MySQL/MariaDB) for its tests; basically, just a clean database it can write to. I have decided to use Jenkins for this. I have managed to set up an environment where I can compile the application, but I have failed to provide it with a database.
What I have tried is to use a Docker image with Java and MariaDB. However, I run into problems starting the MariaDB daemon, because at that point Jenkins has already switched to its own user (UID 1000), which doesn't have permission to start the daemon; only the root user can do that.
My Dockerfile:
FROM eclipse-temurin:17-jdk-focal
RUN apt-get update \
&& apt-get install -y git mariadb-client mariadb-server wget \
&& apt-get clean
COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
The docker-entrypoint.sh is pretty trivial (and also chmod a+x'd, so that's not the problem):
#!/bin/sh
service mysql start
exec "$@"
However, Jenkins fails with these messages:
$ docker run -t -d -u 1000:1001 [...] c8b472cda8b242e11e2d42c27001df616dbd9356 cat
$ docker top cbc373ea10653153a9fe76720c204e8c2fb5e2eb572ecbdbd7db28e1d42f122d -eo pid,comm
ERROR: The container started but didn't run the expected command. Please double check your ENTRYPOINT does execute the command passed as docker run argument, as required by official docker images (see https://github.com/docker-library/official-images#consistency for entrypoint consistency requirements).
Alternatively you can force image entrypoint to be disabled by adding option `--entrypoint=''`.
I have tried debugging this from the command line using the built Docker image c8b472cda8b. The problem is as described before: because Jenkins passes -u 1000:1001 to Docker, the docker-entrypoint.sh script no longer runs as root and therefore fails to start the daemon. Somewhere in Docker or Jenkins the error is "eaten up" and not shown, but basically the end result is that mysqld doesn't run and the script never gets to exec "$@".
If I execute exactly the same command as Jenkins, but without the -u ... argument, leaving me as root, then everything works fine.
I'm sure there must be a simple way to start the daemon and/or set this up completely differently (an external database?), but I can't figure it out. I'm practically new to Docker and especially to Jenkins.
My suggestion is:
Run the container without -u (as root)
Create a jenkins user inside the container (via the Dockerfile)
At the end of entrypoint.sh, switch to the jenkins user with su - jenkins
One disadvantage is that every time you enter the container you will be the root user
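A minimal sketch of that approach, assuming the UID/GID pair 1000:1001 that Jenkins passes; the jenkins user name and the runuser call are illustrative:
In the Dockerfile:
RUN groupadd -g 1001 jenkins \
&& useradd -m -u 1000 -g 1001 jenkins
And in docker-entrypoint.sh:
#!/bin/sh
# start the daemon while still root, then drop privileges for the actual command
service mysql start
exec runuser -u jenkins -- "$@"
With this, the Jenkins job can omit -u 1000:1001 and still end up running its commands as UID 1000.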

How to execute start script in Dockerfile

I want to create a new image from a JDK base image. It builds without errors, but when I run my new image it returns a container ID, and I can't see the process info with docker ps. This is my Dockerfile:
# specified jdk version
FROM openjdk:7-jre
# env
ENV APP_HOME /usr/src/KOAL-OCSP
ENV PATH $APP_HOME:$PATH
# copy my app in .zip to /usr/src
COPY myapp.zip /usr/src/
# unzip copy file
RUN unzip /usr/src/myapp.zip
WORKDIR $APP_HOME
#port
EXPOSE 80
#run my app's setup script when the container starts
CMD ["service.sh" "start"]
service.sh is a setup script in my app's root folder; I want the script to be executed automatically when the self-built image is run.
I suspect that the container has executed and exited successfully. The container will stay alive as long as the processes you started via the service.sh script are still running.
In your case, service.sh has executed and exited, thus causing the container to exit.
To view all containers, use docker ps -a
Update:
The error /bin/sh: 1: ./service.sh: not found indicates that the service.sh script is not found under $APP_HOME inside the Docker image. Make sure you add it under $APP_HOME using
ADD service.sh $APP_HOME
CMD ["service.sh" "start"]
The above is not valid JSON: it's missing a comma in the array, so Docker will execute it as a string, which will fail since ["service.sh" will not be found as a command to run.
If you use docker ps -a you will see a list of all containers, including exited ones. From there, you can use docker logs $(docker ps -lq) to see the logs of the last container you tried to run, and docker inspect $(docker ps -lq) to see all the details about it, including the exit code.
To get past your current error, correct your syntax with the missing comma:
CMD ["service.sh", "start"]
For the next problem you are seeing, a "not found" error can indicate:
The command doesn't exist inside your container (at the expected location). In your scenario, make sure it is included in /usr/src/KOAL-OCSP, which you unzip in your image.
The shell script does exist, but calls a binary on its first line that doesn't exist in your image, e.g. if it starts with #!/bin/bash but you only have /bin/sh in your container. This also happens when you edit the files on a Windows system and carriage returns (^M) become part of the name of the binary the container is looking for (/bin/sh^M instead of /bin/sh).
For binaries, this can happen if the executable you are running has library dependencies that do not exist inside your container. For example, if you build in a glibc environment and run the container in a musl libc environment such as Alpine, this same error message will appear.
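Putting the fixes together, a corrected sketch of the Dockerfile, assuming myapp.zip expands to the KOAL-OCSP folder containing an executable service.sh that keeps a process in the foreground:
FROM openjdk:7-jre
ENV APP_HOME /usr/src/KOAL-OCSP
ENV PATH $APP_HOME:$PATH
COPY myapp.zip /usr/src/
RUN unzip /usr/src/myapp.zip -d /usr/src
WORKDIR $APP_HOME
EXPOSE 80
# exec form must be valid JSON: note the comma between arguments
CMD ["service.sh", "start"]
If service.sh start only spawns a daemon and returns, the container will still exit; the script needs to end with a foreground process.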

docker build how to run intermediate containers with centos:systemd

I am trying to build a docker image that is based on centos:systemd. In my Dockerfile I am running a command that depends on systemd running, and it fails with the following error:
Failed to get D-Bus connection: Operation not permitted
error: %pre(mod-php-7.1-apache2-zend-server-7.1.7-16.x86_64) scriptlet failed, exit status 1
Error in PREIN scriptlet in rpm package mod-php-7.1-apache2-zend-server-7.1.7-16.x86_64
How can I get the intermediate containers to run with --privileged and with -v /sys/fs/cgroup:/sys/fs/cgroup:ro mapped?
If I comment out the installer and just run the container and manually execute the installer it works fine.
Here is the Dockerfile
FROM centos/systemd
COPY ./ZendServer-9.1.0-RepositoryInstaller-linux.tar.gz /opt
RUN tar -xvf /opt/ZendServer-9.1.0-RepositoryInstaller-linux.tar.gz -C /opt/
RUN /opt/ZendServer-RepositoryInstaller-linux/install_zs.sh 7.1 java --automatic
If your installer needs systemd running, I think you will need to launch a container with the base centos/systemd image, manually run the commands, and then save the result using docker commit. The base image ENTRYPOINT and CMD are not run while child images are getting built, but they do run if you launch a container and make your changes. After manually executing the installer, run docker commit {my_intermediate_container} {my_image}:{my_version}, replacing the bits in curly braces with the container name/hash, your desired image name, and image version.
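A sketch of that manual workflow (container and image names are illustrative):
docker run -d --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro --name zend-install centos/systemd
docker cp ZendServer-9.1.0-RepositoryInstaller-linux.tar.gz zend-install:/opt/
docker exec zend-install tar -xvf /opt/ZendServer-9.1.0-RepositoryInstaller-linux.tar.gz -C /opt/
docker exec zend-install /opt/ZendServer-RepositoryInstaller-linux/install_zs.sh 7.1 java --automatic
docker commit zend-install my-zend-server:9.1.0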
You might also be able to change your Dockerfile to launch init before running your installer. I am not sure if that will work in the context of building an image, but it would look something like this (init never exits on its own, so it has to be backgrounded):
FROM centos/systemd
COPY ./ZendServer-9.1.0-RepositoryInstaller-linux.tar.gz /opt
RUN tar -xvf /opt/ZendServer-9.1.0-RepositoryInstaller-linux.tar.gz -C /opt/ \
&& (/usr/sbin/init &) \
&& sleep 5 \
&& /opt/ZendServer-RepositoryInstaller-linux/install_zs.sh 7.1 java --automatic
A LAMP stack inside a docker container does not need systemd; I have made this work with the docker-systemctl-replacement script. It is able to start and stop a service according to what's written in the *.service file. You could try it with whatever ZendServer normally does outside a docker container.
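A sketch of that approach; here systemctl.py stands for the replacement script from the docker-systemctl-replacement project, copied over the real systemctl:
FROM centos/systemd
COPY systemctl.py /usr/bin/systemctl
COPY ./ZendServer-9.1.0-RepositoryInstaller-linux.tar.gz /opt
RUN tar -xvf /opt/ZendServer-9.1.0-RepositoryInstaller-linux.tar.gz -C /opt/ \
&& /opt/ZendServer-RepositoryInstaller-linux/install_zs.sh 7.1 java --automatic
# the replacement script can also act as the container's init process
CMD ["/usr/bin/systemctl"]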

Why doesn't the container execute scripts inside /etc/my_init.d/ on startup?

I have the following Dockerfile:
FROM phusion/baseimage:0.9.16
RUN mv /build/conf/ssh-setup.sh /etc/my_init.d/ssh-setup.sh
EXPOSE 80 22
CMD ["node", "server.js"]
My /build/conf/ssh-setup.sh looks like the following:
#!/bin/sh
set -e
echo "${SSH_PUBKEY}" >> /var/www/.ssh/authorized_keys
chown www-data:www-data -R /var/www/.ssh
chmod go-rwx -R /var/www/.ssh
It just appends the SSH_PUBKEY environment variable to /var/www/.ssh/authorized_keys to enable SSH access.
I run my container just like the following:
docker run -d -p 192.168.99.100:80:80 -p 192.168.99.100:2222:22 \
-e SSH_PUBKEY="$(cat ~/.ssh/id_rsa.pub)" \
--name dev hub.core.test/dev
My container starts fine, but unfortunately the /etc/my_init.d/ssh-setup.sh script doesn't get executed and I'm unable to ssh into my container.
Could you help me understand why /etc/my_init.d/ssh-setup.sh doesn't get executed when my container starts?
I had a pretty similar issue, also using phusion/baseimage. It turned out that my start script needed to be executable, e.g.
RUN chmod +x /etc/my_init.d/ssh-setup.sh
Note:
I noticed you're not using baseimage's init system (maybe on purpose?). But, from my understanding of their manifesto, doing that forgoes their whole "a better init system" approach.
My understanding is that they want you to move your start command, node server.js, to a script within my_init.d, e.g. /etc/my_init.d/start.sh, and in your Dockerfile use their init system as the start command, e.g.
FROM phusion/baseimage:0.9.16
RUN mv /build/conf/start.sh /etc/my_init.d/start.sh
RUN mv /build/conf/ssh-setup.sh /etc/my_init.d/ssh-setup.sh
RUN chmod +x /etc/my_init.d/start.sh
RUN chmod +x /etc/my_init.d/ssh-setup.sh
EXPOSE 80 22
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]
That'll start baseimage's init system, which will then go and look in your /etc/my_init.d/ and execute all the scripts in there in alphabetical order. And, of course, they should all be executable.
My references for this are: Running start scripts and Getting Started.
As the previous answer states, you did not execute ssh-setup.sh. You can only have one process in a Docker container (that is a lie, but it will do for now). Why not run ssh-setup.sh as your CMD/ENTRYPOINT process and have ssh-setup.sh exec into your final command, i.e.
exec node server.js
Or, cleaner: have a script like boot.sh which runs any init scripts, like ssh-setup.sh, then execs to node.
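A sketch of that boot.sh idea (the path is illustrative):
#!/bin/sh
set -e
# run one-time init scripts first
/etc/my_init.d/ssh-setup.sh
# then replace the shell with the long-running process so it receives signals directly
exec node server.js
with RUN chmod +x /boot.sh and CMD ["/boot.sh"] in the Dockerfile.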
Because you didn't invoke /etc/my_init.d/ssh-setup.sh when you started your container. You should call it in CMD or ENTRYPOINT; read more here:
RUN executes command(s) in a new layer and creates a new image; e.g., it is often used for installing software packages.
CMD sets the default command and/or parameters, which can be overwritten from the command line when the docker container runs.
ENTRYPOINT configures a container that will run as an executable.
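An illustrative (hypothetical) Dockerfile fragment showing all three:
# RUN: executed at build time, result baked into an image layer
RUN apt-get update && apt-get install -y curl
# ENTRYPOINT: the fixed executable the container runs
ENTRYPOINT ["/docker-entrypoint.sh"]
# CMD: default arguments for the ENTRYPOINT, overridable at docker run
CMD ["node", "server.js"]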

frequent restart - docker containers in marathon/mesos

I have successfully dockerized my web server application. Now I want to explore more by deploying it directly to a Mesos slave through the Marathon framework.
I can deploy a Docker container to Marathon in two different ways: through the command line or through the Marathon web UI.
Both worked for me, but the challenge is that when I deploy my Docker image, Marathon keeps restarting the job, and on the Mesos UI page I can see many finished tasks for the same container, close to 10 tasks per minute. This is not expected, I believe.
My Dockerfile looks like below:
FROM ubuntu:latest
#---------- file Author / Maintainer
MAINTAINER "abc"
#---------- update the repository sources list
RUN apt-get update && apt-get install -y \
apache2 \
curl \
openssl \
php5 \
php5-mcrypt \
unzip
#--------- installing composer
RUN curl -sS https://getcomposer.org/installer | php
RUN mv composer.phar /usr/local/bin/composer
RUN a2enmod rewrite
#--------- modifying the 000default file
COPY ./ /var/www/airavata-php-gateway
WORKDIR /etc/apache2/sites-available/
RUN sed -i 's/<\/VirtualHost>/<Directory "\/var\/www"> \n AllowOverride All \n <\/Directory> \n <\/VirtualHost>/g' 000-default.conf
RUN sed -i 's/DocumentRoot \/var\/www\/html/DocumentRoot \/var\/www/g' 000-default.conf
WORKDIR /etc/php5/mods-available/
RUN sed -i 's/extension=mcrypt.so/extension=\/usr\/lib\/php5\/20121212\/mcrypt.so/g' mcrypt.ini
WORKDIR /var/www/airavata-php-gateway/
RUN php5enmod mcrypt
#--------- making storage folder writable
RUN chmod -R 777 /var/www/airavata-php-gateway/app/storage
#-------- starting command
CMD ["sh", "-c", "sh pga-setup.sh ; service apache2 restart ; /bin/bash"]
#--------- exposing apache to default port
EXPOSE 80
Now I am clueless about how to resolve this issue; any guidance will be highly appreciated.
Thanks
Marathon is meant to run long-running tasks. So in your case, if you start a Docker container that does not keep running, meaning it exits (successfully or unsuccessfully), Marathon will start it again.
For example, I started a Docker container using the simplest image, hello-world. That generated more than 10 tasks in the Mesos UI in a matter of seconds! This was expected: the code inside the Docker container executed successfully and exited normally, and since it exited, Marathon made sure that another instance of the app was started immediately.
On the other hand, when I start an nginx container which keeps listening on port 80, it becomes a long-running task, and a new task (Docker container) is spun up only when the existing container exits (successfully or unsuccessfully).
You probably need to work on the CMD section of your Dockerfile. Does the container in question keep running when started normally, that is, without Marathon, just using plain docker run? If yes, check whether it keeps running in detached mode (docker run -d). If it exits, then CMD is the part you need to work on.
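In this Dockerfile specifically, the CMD ends with /bin/bash, which exits immediately when no TTY is attached, so the container finishes and Marathon restarts it. A sketch of a foreground-friendly CMD, assuming pga-setup.sh is a one-time setup step:
CMD ["sh", "-c", "sh pga-setup.sh && exec apache2ctl -D FOREGROUND"]
Running Apache with -D FOREGROUND keeps it as the container's main process instead of daemonizing via service apache2 restart.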
