docker run behavior with commands in different order

I'm trying to figure out how Docker handles the commands passed to it.
For example, if I run this, the JS app starts fine:
docker run ...name etc.. /bin/bash -c "cd /video_recordings/voddirectory; pm2 start app.js; pm2 startup; stunnel; nginx;"
However, if I do it like this, in a different order:
"stunnel; nginx; cd /video_recordings/voddirectory; pm2 start app.js; pm2 startup;"
The JS app does not run.
What behavior is docker looking for to continue to the next command?
Similarly, if I use this in my Dockerfile:
ENTRYPOINT stunnel && nginx -g 'daemon off;' && bash
and then do a
docker run ...name etc.. /bin/bash -c "cd /video_recordings/voddirectory; pm2 start app.js; pm2 startup;"
The JS app never runs.

With either && or ; between commands, the shell executes them in order, so each command has to finish before the next one starts (the difference is that && only continues if the previous command exited successfully).
BUT the call nginx -g 'daemon off;' makes nginx run in the foreground. Therefore it never finishes running, and the commands that follow never run.
However, I am still not sure why stunnel; nginx; cd /video_recordings/voddirectory; pm2 start app.js; pm2 startup; does not run the JS app, since without 'daemon off' the normal behaviour of nginx is to go into the background.
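As a hedged illustration, the ordering problem goes away if every command that must finish comes first and the single foreground process comes last (a sketch reusing the question's command; it assumes stunnel daemonizes itself, which is its default unless its config sets foreground = yes):
docker run ...name etc.. /bin/bash -c "cd /video_recordings/voddirectory; pm2 start app.js; pm2 startup; stunnel; exec nginx -g 'daemon off;'"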

Related

How to run two commands on Dockerfile?

I have to execute two commands in the Dockerfile, but both commands attach to the terminal and block execution of the next one.
Dockerfile:
FROM sinet/nginx-node:latest
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
RUN git clone https://name:pass#bitbucket.org/joaocromg/front-web-alferes.git
WORKDIR /usr/src/app/front-web-alferes
RUN npm install
RUN npm install bower -g
RUN npm install gulp -g
RUN bower install --allow-root
COPY default.conf /etc/nginx/conf.d/
RUN nginx -g 'daemon off;' & # command 1 blocking
CMD ["gulp watch-dev"] # command 2 not executed
Does someone know how I can solve this?
Try creating a script like this:
#!/bin/sh
nginx -g 'daemon off;' &
gulp watch-dev
And then execute it in your CMD:
CMD /bin/my-script.sh
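For completeness, the script also has to be copied into the image and made executable; a sketch (the /bin/my-script.sh path is just the example used above):
COPY my-script.sh /bin/my-script.sh
RUN chmod +x /bin/my-script.sh
CMD /bin/my-script.sh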
Also, notice your last line would not have worked:
CMD ["gulp watch-dev"]
It needed to be either:
CMD gulp watch-dev
or:
CMD ["gulp", "watch-dev"]
Also, notice that RUN is for executing a command that will change your image state (like RUN apt install curl), not for executing a program that needs to be running when you run your container. From the docs:
The RUN instruction will execute any commands in a new layer on top of the current image and commit the results. The resulting committed image will be used for the next step in the Dockerfile.
I suggest you try supervisord in this case. http://supervisord.org/
Edit: Here is a dockerized example of httpd and ssh daemon: https://riptutorial.com/docker/example/14132/dockerfile-plus-supervisord-conf
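A minimal supervisord.conf for the two processes in the question might look like this (a sketch; the program sections and the working directory are assumptions based on the question's Dockerfile):
[supervisord]
nodaemon=true

[program:nginx]
command=nginx -g 'daemon off;'

[program:gulp]
command=gulp watch-dev
directory=/usr/src/app/front-web-alferes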
The answer here is that nginx -g 'daemon off;' intentionally starts nginx in the foreground, which is what blocks your second command. That flag is intended for containers that run nginx as their foreground process. Running RUN nginx instead would start nginx, spawn the master and worker processes, and (hopefully) exit with a zero status code. Although, as mentioned above, this is not the intended use of RUN, so a bash script works best in this case.

Docker container exits immediately upon invocation

I am using docker version 18.09.0. The image builds without errors. Upon creating a container from the image, the container runs and exits immediately with exit status 0, even though I use the -it option. Here is the Dockerfile.
FROM node:8.15-alpine
WORKDIR /usr/src/app
COPY package*.json ./
COPY middleware middleware
COPY hfc-key-store hfc-key-store
COPY app.js ./
RUN apk --no-cache --virtual build-dependencies add \
python \
make \
g++ \
&& npm install \
&& npm install -g forever
ENTRYPOINT ["forever", "start", "-l", "/logsBackEnd.txt", "--spinSleepTime", "10000", "app.js"]
Command to build image:
docker image build -t nid-api:1.0 .
Command to run container:
docker run -it nid-api:1.0
You need to run it in detached mode using -d.
There are two reasons I can think of for a container to exit:
there is no service running inside the container, or
the service is running but the container was started without any detach option.
The first case seems more related to your error: nothing stays running in the foreground. In general, run long-lived containers in detached mode with docker run -d.
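For example (a sketch using the question's image tag):
docker run -d --name nid-api nid-api:1.0
docker logs -f nid-api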
Also try the below as well.
Docker container will automatically stop after "docker run -d"
forever start runs as a daemon inside the Docker container, and that is likely what makes the container exit immediately: the foreground command returns as soon as the daemon is forked, and since that command is the container's PID 1, the container stops with it.
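Since it is the start subcommand that daemonizes, a hedged fix is to drop it so forever itself stays in the foreground as PID 1 (a sketch based on the question's ENTRYPOINT; log output then goes to stdout, where docker logs can pick it up):
ENTRYPOINT ["forever", "--spinSleepTime", "10000", "app.js"]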
You can try to use dumb-init to start any process running in a docker container so that exit signals are handled correctly.
dumb-init enables you to simply prefix your command with dumb-init. It acts as PID 1 and immediately spawns your command as a child process, taking care to properly handle and forward signals as they are received.
dumb-init runs as PID 1, acting like a simple init system. It launches a single process and then proxies all received signals to a session rooted at that child process.
Since your actual process is no longer PID 1, when it receives signals from dumb-init, the default signal handlers will be applied, and your process will behave as you would expect. If your process dies, dumb-init will also die, taking care to clean up any other processes that might still remain.
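Applied to the image above, a minimal sketch might look like this (it assumes dumb-init is available as an apk package, which is an assumption about the Alpine repositories):
RUN apk add --no-cache dumb-init
# dumb-init runs as PID 1 and forwards signals to forever,
# which runs in the foreground here (no "start" subcommand).
ENTRYPOINT ["dumb-init", "--", "forever", "app.js"]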

Docker: define more than one default command in a Dockerfile

If I have a Docker file that has at the end:
ENTRYPOINT /bin/bash
and run the container via docker run and type in the terminal
gulp
that gives me gulp running, which I can easily terminate with Ctrl+C,
but when I put gulp as the default command in the Dockerfile this way:
CMD ["/bin/bash", "-c", "gulp"]
or this:
ENTRYPOINT ["/bin/bash", "-c", "gulp"]
then when I run the container via docker run, gulp is running but I can't terminate it via the Ctrl+C hotkey.
The Dockerfile I used to build the image:
FROM node:8
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y libltdl-dev
WORKDIR /home/workspace
RUN npm install gulp -g
#works but cant kill gulp with Ctrl+C
#CMD ["/bin/bash", "-c", "gulp"]
#works but cant kill gulp with Ctrl+C
#ENTRYPOINT ["/bin/bash", "-c", "gulp"]
# need to type command gulp in cli to run it
# but I'm able to terminate gulp with Ctrl+C
ENTRYPOINT /bin/bash
It makes sense to me that I can't terminate the container's default command defined in the Dockerfile, because there would be no other command left to run once I terminated the default one.
How can I state in the Dockerfile that I want to run /bin/bash as the default, and gulp on top of that, so that if I terminate gulp I'm switched back to the bash command-line prompt?
Since gulp is a build tool, you'd generally run it in the course of building your container, not while you're starting it. Your Dockerfile might look roughly like
FROM node:8
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY . ./
RUN gulp
CMD yarn run start
When you run docker build, along the way it will print out things like
---> b35f4035db3f
Step 6/7 : RUN gulp
---> Running in 02071fceb21b
The important thing is that the last hex string that gets printed out in each step (the line before each Dockerfile command) is a valid Docker image ID. If your build goes wrong, you can
host$ sudo docker run --rm -it b35f4035db3f bash
root@38ed4261ab0f:/app# gulp
Once you've finished debugging the issue, you can check whatever fixes into your source control system, and rebuild the image as needed later.
Gulp is a build tool, so you need to install it using a RUN instruction; that commits the changes on top of your base image.
If you use it as the default command, via either ENTRYPOINT or CMD in your Dockerfile, then you definitely can't kill it with Ctrl+C, since what you are running is not a shell process but a container.
If your Dockerfile has an ENTRYPOINT, you can stop the container using docker stop.
NOTE: A container cannot be killed using Ctrl+C; it needs to be stopped via: docker stop container_name
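If the original goal (gulp running by default, with a shell to fall back to when it is terminated) is still wanted, one hedged option is a small wrapper script; entrypoint.sh is a hypothetical name, and the sketch assumes the container is started with docker run -it so Ctrl+C reaches gulp:
#!/bin/bash
# Run gulp in the foreground; Ctrl+C (SIGINT) terminates it.
gulp
# When gulp exits, replace this script with an interactive shell.
exec /bin/bash
and in the Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]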

Why doesn't the container execute scripts inside /etc/my_init.d/ on startup?

I have the following Dockerfile:
FROM phusion/baseimage:0.9.16
RUN mv /build/conf/ssh-setup.sh /etc/my_init.d/ssh-setup.sh
EXPOSE 80 22
CMD ["node", "server.js"]
My /build/conf/ssh-setup.sh looks like the following:
#!/bin/sh
set -e
echo "${SSH_PUBKEY}" >> /var/www/.ssh/authorized_keys
chown www-data:www-data -R /var/www/.ssh
chmod go-rwx -R /var/www/.ssh
It just appends the SSH_PUBKEY env variable to /var/www/.ssh/authorized_keys to enable SSH access.
I run my container just like the following:
docker run -d -p 192.168.99.100:80:80 -p 192.168.99.100:2222:22 \
-e SSH_PUBKEY="$(cat ~/.ssh/id_rsa.pub)" \
--name dev hub.core.test/dev
My container starts fine, but unfortunately the /etc/my_init.d/ssh-setup.sh script doesn't get executed and I'm unable to SSH into my container.
Could you help me understand why /etc/my_init.d/ssh-setup.sh doesn't get executed when my container starts?
I had a pretty similar issue, also using phusion/baseimage. It turned out that my start script needed to be executable, e.g.
RUN chmod +x /etc/my_init.d/ssh-setup.sh
Note:
I noticed you're not using baseimage's init system (maybe on purpose?). But, from my understanding of their manifesto, doing that forgoes their whole "a better init system" approach.
My understanding is that they want you to, in your case, move your start command of node server.js to a script within my_init.d, e.g. /etc/my_init.d/start.sh, and in your Dockerfile use their init system as the start command, e.g.
FROM phusion/baseimage:0.9.16
RUN mv /build/conf/start.sh /etc/my_init.d/start.sh
RUN mv /build/conf/ssh-setup.sh /etc/my_init.d/ssh-setup.sh
RUN chmod +x /etc/my_init.d/start.sh
RUN chmod +x /etc/my_init.d/ssh-setup.sh
EXPOSE 80 22
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]
That'll start baseimage's init system, which will then go and look in your /etc/my_init.d/ and execute all the scripts in there in alphabetical order. And, of course, they should all be executable.
My references for this are: Running start scripts and Getting Started.
As the previous answer states, you did not execute ssh-setup.sh. You can only have one process in a Docker container (that is a lie, but it will do for now). Why not run ssh-setup.sh as your CMD/ENTRYPOINT process and have ssh-setup.sh exec into your final command, i.e.
exec node server.js
Or cleaner, have a script, like boot.sh, which runs any init scripts, like ssh-setup.sh, then execs to node.
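A sketch of such a boot.sh (the name is just this answer's example; the script path and node command are taken from the question):
#!/bin/sh
set -e
# Run the one-shot init script first.
/etc/my_init.d/ssh-setup.sh
# Replace the shell with the long-running server so it becomes PID 1.
exec node server.js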
Because you didn't invoke /etc/my_init.d/ssh-setup.sh when you started your container.
You should call it in CMD or ENTRYPOINT; read more here:
RUN executes command(s) in a new layer and creates a new image; e.g., it is often used for installing software packages.
CMD sets the default command and/or parameters, which can be overwritten from the command line when the docker container runs.
ENTRYPOINT configures a container that will run as an executable.

frequent restart - docker containers in marathon/mesos

I have been successful so far in completely dockerizing my webserver application. Now I want to explore more by deploying it directly to a Mesos slave through the Marathon framework.
I can deploy a Docker container to Marathon in two different ways, either through the command line or through the Marathon web UI.
Both worked for me, but the challenge is that when I try to deploy my Docker image, Marathon frequently restarts the job, and in the Mesos UI I can see many finished jobs for the same container, close to 10 tasks per minute. This is not what I expected.
My docker file looks like below:
FROM ubuntu:latest
#---------- file Author / Maintainer
MAINTAINER "abc"
#---------- update the repository sources list
RUN apt-get update && apt-get install -y \
apache2 \
curl \
openssl \
php5 \
php5-mcrypt \
unzip
#--------- installing composer
RUN curl -sS https://getcomposer.org/installer | php
RUN mv composer.phar /usr/local/bin/composer
RUN a2enmod rewrite
#--------- modifying the 000default file
COPY ./ /var/www/airavata-php-gateway
WORKDIR /etc/apache2/sites-available/
RUN sed -i 's/<\/VirtualHost>/<Directory "\/var\/www"> \n AllowOverride All \n <\/Directory> \n <\/VirtualHost>/g' 000-default.conf
RUN sed -i 's/DocumentRoot \/var\/www\/html/DocumentRoot \/var\/www/g' 000-default.conf
WORKDIR /etc/php5/mods-available/
RUN sed -i 's/extension=mcrypt.so/extension=\/usr\/lib\/php5\/20121212\/mcrypt.so/g' mcrypt.ini
WORKDIR /var/www/airavata-php-gateway/
RUN php5enmod mcrypt
#--------- making storage folder writable
RUN chmod -R 777 /var/www/airavata-php-gateway/app/storage
#-------- starting command
CMD ["sh", "-c", "sh pga-setup.sh ; service apache2 restart ; /bin/bash"]
#--------- exposing apache to default port
EXPOSE 80
Now I am clueless about how to resolve this issue; any guidance will be highly appreciated.
Thanks
Marathon is meant to run long-running tasks. So in your case, if you start a Docker container that does not keep listening on a specific port, meaning it exits successfully or unsuccessfully, Marathon will start it again.
For example, I started a Docker container using the simplest image hello-world. That generated more than 10 processes in Mesos UI in a matter of seconds! This was expected. Code inside Docker container was executing successfully and exiting normally. And since it exited, Marathon made sure that another instance of the app was started immediately.
On the other hand, when I start an nginx container which keeps listening on port 80, it becomes a long running task and a new task (Docker container) is spun up only when the existing container exits (successfully or unsuccessfully).
You probably need to work on the CMD section of your Dockerfile. Does the container in question keep running when started normally? That is, without Marathon - just using plain docker run? If yes, check if it keeps running in detached mode - docker run -d. If it exits, then CMD is the part you need to work on.
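If it does exit, a hedged rewrite of the question's CMD keeps Apache in the foreground instead of dropping into /bin/bash (a sketch; it assumes the apache2ctl that ships with Ubuntu's apache2 package):
# Run setup once, then keep Apache in the foreground so Marathon
# sees a long-running task instead of a finished one.
CMD sh pga-setup.sh && exec apache2ctl -D FOREGROUND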
