I have to execute two commands in my Dockerfile, but each of them attaches to the terminal and blocks execution of the next.
Dockerfile:
FROM sinet/nginx-node:latest
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
RUN git clone https://name:pass@bitbucket.org/joaocromg/front-web-alferes.git
WORKDIR /usr/src/app/front-web-alferes
RUN npm install
RUN npm install bower -g
RUN npm install gulp -g
RUN bower install --allow-root
COPY default.conf /etc/nginx/conf.d/
RUN nginx -g 'daemon off;' & # command 1 blocking
CMD ["gulp watch-dev"] # command 2 not executed
Does anyone know how I can solve this?
Try creating a script like this:
#!/bin/sh
nginx -g 'daemon off;' &
gulp watch-dev
And then execute it in your CMD:
CMD /bin/my-script.sh
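Note that the script also has to be copied into the image and marked executable; a minimal sketch, assuming my-script.sh sits next to the Dockerfile:

```dockerfile
# Copy the startup script into the image and make it executable
COPY my-script.sh /bin/my-script.sh
RUN chmod +x /bin/my-script.sh
CMD /bin/my-script.sh
```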
Also, notice your last line would not have worked:
CMD ["gulp watch-dev"]
It needed to be either:
CMD gulp watch-dev
or:
CMD ["gulp", "watch-dev"]
Also, notice that RUN is for executing a command that will change your image state (like RUN apt install curl), not for executing a program that needs to be running when you run your container. From the docs:
The RUN instruction will execute any commands in a new layer on top of the current image and commit the results. The resulting committed image will be used for the next step in the Dockerfile.
I suggest you try supervisord in this case. http://supervisord.org/
Edit: Here is a dockerized example of httpd plus an ssh daemon: https://riptutorial.com/docker/example/14132/dockerfile-plus-supervisord-conf
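For the nginx + gulp pair from the question, a minimal supervisord.conf might look like this (a sketch; the working directory and program names are assumptions based on the Dockerfile above):

```ini
[supervisord]
nodaemon=true

[program:nginx]
command=nginx -g 'daemon off;'

[program:gulp]
command=gulp watch-dev
directory=/usr/src/app/front-web-alferes
```

With nodaemon=true, supervisord itself stays in the foreground as the container's main process while supervising both programs.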
The answer here is that RUN nginx -g 'daemon off;' intentionally starts nginx in the foreground, which is what blocks your second command. That invocation is meant for starting Docker containers with nginx as the foreground process. Running RUN nginx instead would start nginx, spawn the master and worker processes, and (hopefully) exit with a zero status code. Although, as mentioned above, this is not the intended use of RUN, so a bash script would work best in this case.
Related
I'm a bit new to docker and it's the first time I'm trying to add a healthcheck.
The docker application I'm using is the example from here:
https://docs.docker.com/get-started/02_our_app/
I simply followed the steps to get a container with a service that runs locally on port 3000. I browsed to http://localhost:3000 and it does work.
The Dockerfile before any changes I've made:
# syntax=docker/dockerfile:1
FROM node:12-alpine
RUN apk add --no-cache python g++ make
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]
And the original command to run the docker:
docker run -dp 3000:3000 getting-started
Then, I tried to add a healthcheck in a few ways.
First way: I changed the Dockerfile as follows, then re-built and re-ran:
# syntax=docker/dockerfile:1
FROM node:12-alpine
HEALTHCHECK --interval=3s --timeout=1s CMD curl --fail http://localhost:3000 || exit 1
RUN apk add --no-cache python g++ make
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]
Second way: I changed the run command as follows:
docker run -dp 3000:3000 --health-cmd "curl --fail http://localhost:3000 || exit 1" getting-started
In both cases, I checked the health status using docker ps, and after the "health: starting" phase ended, it always entered the "unhealthy" phase, never "healthy".
In both cases, I made sure that http://localhost:3000 works and returns HTTP status 200.
While experimenting in all sorts of ways, I tried to remove the || exit 1 part but it did not help. I tried to replace it with || exit 0, and then indeed it displayed "healthy", but that doesn't really mean anything.
Does anyone have any idea what I am doing wrong? I need to do something more complex with the healthcheck, but for starters I want to succeed in making it work for something simple.
More details:
I'm using Windows 10 Enterprise Version 20H2, Docker version 20.10.7, build f0df350. I'm running the commands from Git Bash.
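One thing worth checking (an assumption, since the question is unanswered here): node:12-alpine does not ship with curl, so the health command itself fails regardless of whether the app responds. Alpine's BusyBox does include wget, so a sketch of the same check without curl:

```dockerfile
# Assumption: curl is absent in node:12-alpine; BusyBox wget is present.
# The healthcheck runs inside the container, so it hits the app's port
# directly rather than the host-mapped port.
HEALTHCHECK --interval=3s --timeout=1s \
  CMD wget -q -O /dev/null http://localhost:3000 || exit 1
```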
I'm trying to figure out the way docker handles commands presented to it.
For example, if I run this, the JS app starts fine.
docker run ...name etc.. /bin/bash -c "cd /video_recordings/voddirectory; pm2 start app.js; pm2 startup; stunnel; nginx;"
However, if I put the commands in a different order:
"stunnel; nginx; cd /video_recordings/voddirectory; pm2 start app.js; pm2 startup;"
The JS app does not run.
What behavior is docker looking for to continue to the next command?
Similarly, if I use this in my Dockerfile:
ENTRYPOINT stunnel && nginx -g 'daemon off;' && bash
and then do a
docker run ...name etc.. /bin/bash -c "cd /video_recordings/voddirectory; pm2 start app.js; pm2 startup;"
The JS app never runs.
With either && or ; between commands, the shell executes them in order, so each command needs to finish before the next one runs.
BUT nginx -g 'daemon off;' makes nginx run in the foreground, so it never finishes, and the commands that follow never run.
However, I am still not sure why stunnel; nginx; cd /video_recordings/voddirectory; pm2 start app.js; pm2 startup; does not work, since without 'daemon off;' nginx should normally go into the background.
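The sequencing rules themselves can be checked in plain shell, outside Docker (a minimal sketch):

```shell
# ';' runs the next command regardless of the previous exit status;
# '&&' runs it only if the previous command succeeded (exit 0).
# A long-running foreground command blocks everything after it either way.
false ; echo "semicolon: runs anyway"
false && echo "and: runs" || echo "and: short-circuited"
true && echo "and: runs on success"
```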
I want to have a script that runs in my docker container at every start/restart. It should run in the container's bash and execute:
cd app
Console/cake schema update
and
Console/cake migration
I tried running a process and writing things in my Dockerfile, but none of that worked for me. I also read "Run multiple services in a container" from the Docker docs, but I didn't find a solution.
COPY starter.sh /etc/init.d/starter.sh
RUN chmod +x /etc/init.d/starter.sh
RUN chmod 755 /etc/init.d/starter.sh
RUN update-rc.d starter defaults 10
RUN /etc/init.d/starter.sh
In my starter.sh there is some test code like
RUN mkdir /var/www/hello
so that I can tell whether it works.
Make use of ENTRYPOINT in your Dockerfile.
Add these lines to your Dockerfile:
COPY starter.sh /opt/starter.sh
ENTRYPOINT ["/opt/starter.sh"]
Update:
If you want to run the Apache web server, then add these lines:
ENTRYPOINT ["/path/to/apache2"]
CMD ["-D", "FOREGROUND"]
This will run apache2 as the first process inside the container, in the foreground.
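For context, exec-form ENTRYPOINT and CMD are concatenated into a single command line, with CMD acting as overridable default arguments; a sketch using the same paths as above:

```dockerfile
ENTRYPOINT ["/path/to/apache2"]
CMD ["-D", "FOREGROUND"]
# Effective command: /path/to/apache2 -D FOREGROUND
# Arguments after the image name in `docker run` replace CMD but keep
# the ENTRYPOINT, e.g. `docker run <image> -X` runs: /path/to/apache2 -X
```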
If I have a Dockerfile that ends with:
ENTRYPOINT /bin/bash
and run the container via docker run and type in the terminal
gulp
that gives me a running gulp process that I can easily terminate with Ctrl+C,
but when I set gulp as the default command in the Dockerfile like this:
CMD ["/bin/bash", "-c", "gulp"]
or this:
ENTRYPOINT ["/bin/bash", "-c", "gulp"]
then when I run the container via docker run, gulp is running but I can't terminate it with the Ctrl+C hotkey.
The Dockerfile I used to build the image:
FROM node:8
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y libltdl-dev
WORKDIR /home/workspace
RUN npm install gulp -g
#works but can't kill gulp with Ctrl+C
#CMD ["/bin/bash", "-c", "gulp"]
#works but can't kill gulp with Ctrl+C
#ENTRYPOINT ["/bin/bash", "-c", "gulp"]
# need to type the command gulp in the CLI to run it
# but I'm able to terminate gulp with Ctrl+C
ENTRYPOINT /bin/bash
It makes sense to me that I can't terminate the container's default command defined in the Dockerfile, because there would be nothing left to run once I terminate it.
How can I state in the Dockerfile that I want to run /bin/bash as the default, with gulp on top of it, so that if I terminate gulp I'm dropped back to the bash command line prompt?
Since gulp is a build tool, you'd generally run it in the course of building your container, not while you're starting it. Your Dockerfile might look roughly like
FROM node:8
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY . ./
RUN gulp
CMD yarn run start
When you run docker build, along the way it will print out things like
---> b35f4035db3f
Step 6/7 : RUN gulp
---> Running in 02071fceb21b
The important thing is that the last hex string that gets printed out in each step (the line before each Dockerfile command) is a valid Docker image ID. If your build goes wrong, you can
host$ sudo docker run --rm -it b35f4035db3f bash
root@38ed4261ab0f:/app# gulp
Once you've finished debugging the issue, you can check whatever fixes into your source control system, and rebuild the image as needed later.
Gulp is a build tool, so you need to install it with a RUN command; that commits the changes on top of your base image.
If you use it as the default command, via either ENTRYPOINT or CMD in your Dockerfile, then you definitely cannot kill it with Ctrl+C, since it is not a shell process you are attached to but a container that you are running.
If your Dockerfile has an ENTRYPOINT, you can stop the container with docker stop.
NOTE: A container cannot be killed with Ctrl+C; it needs to be stopped via: docker stop container_name
I'd like to have some kind of "development docker image" in which npm install is executed every time I restart my Docker container (because I don't want to build, push and pull the new dev image every day from my local machine to our Docker server).
So I thought I could do something like this in my Dockerfile:
CMD npm install git+ssh://git@mycompany.de/my/project.git#develop && npm start
Sadly, this doesn't work. The container stops immediately after docker start and I don't know why, because this works:
RUN npm install git+ssh://git@mycompany.de/my/project.git#develop
CMD npm start
(Just for testing, that's of course not what I want to have). But maybe I have some wrong perception of CMD and someone could enlighten me?
Make your CMD point to a shell script.
CMD ["/my/path/to/entrypoint.sh"]
with that script being:
#!/bin/bash
npm install git+ssh://git@mycompany.de/my/project.git#develop
npm start
# whatever else
I find this easier for a few reasons:
Inevitably the list of commands grows as more needs to be done
It makes it much easier to run containers interactively, as you can start them with docker run mycontainer /bin/bash and then execute your shell script manually. This is helpful when debugging
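One refinement worth considering (my suggestion, not part of the original answer): make the script's last line `exec npm start`, so npm replaces the shell as PID 1 and directly receives the SIGTERM that docker stop sends. The effect of exec, keeping the same PID rather than forking a child, can be seen in plain shell:

```shell
# 'exec' replaces the current process instead of forking a child,
# so the PID does not change: both captured lines are the same number.
pids=$(sh -c 'echo $$; exec sh -c "echo \$\$"')
echo "$pids"
```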