If I have a Dockerfile that ends with:
ENTRYPOINT /bin/bash
and run the container via docker run and type in the terminal
gulp
then gulp runs and I can easily terminate it with Ctrl+C,
but when I set gulp as the default command in the Dockerfile like this:
CMD ["/bin/bash", "-c", "gulp"]
or this:
ENTRYPOINT ["/bin/bash", "-c", "gulp"]
then when I run the container via docker run, gulp runs but I can't terminate it with the Ctrl+C hotkey.
The Dockerfile I used to build the image:
FROM node:8
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y libltdl-dev
WORKDIR /home/workspace
RUN npm install gulp -g
# works but can't kill gulp with Ctrl+C
#CMD ["/bin/bash", "-c", "gulp"]
# works but can't kill gulp with Ctrl+C
#ENTRYPOINT ["/bin/bash", "-c", "gulp"]
# need to type the command gulp in the CLI to run it
# but I'm able to terminate gulp with Ctrl+C
ENTRYPOINT /bin/bash
It makes sense to me that I can't terminate the container's default command defined in the Dockerfile, because there would be no other command left to run once I terminate the default one.
How can I state in the Dockerfile that I want to run /bin/bash as the default and gulp on top of that, so that if I terminate gulp I'm switched back to the bash command-line prompt?
Since gulp is a build tool, you'd generally run it in the course of building your container, not while you're starting it. Your Dockerfile might look roughly like
FROM node:8
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY . ./
RUN gulp
CMD yarn run start
When you run docker build, along the way it will print out things like
---> b35f4035db3f
Step 6/7 : RUN gulp
---> Running in 02071fceb21b
The important thing is that the last hex string that gets printed out in each step (the line before each Dockerfile command) is a valid Docker image ID. If your build goes wrong, you can
host$ sudo docker run --rm -it b35f4035db3f bash
root@38ed4261ab0f:/app# gulp
Once you've finished debugging the issue, you can check whatever fixes into your source control system, and rebuild the image as needed later.
Gulp is a build tool that you need to install using a RUN instruction. This commits the changes on top of your base image.
If you want to use it as the default command, either via ENTRYPOINT or CMD in your Dockerfile, then you cannot kill it with Ctrl+C, since it is not a shell process but in fact the container itself that you are running.
If your Dockerfile has an ENTRYPOINT, you can stop the container using docker stop.
NOTE: A container cannot be killed using Ctrl+C; it needs to be stopped via: docker stop container_name
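For example, from another terminal (the container name here is hypothetical; check docker ps for yours):
# find the running container's name or ID
docker ps
# stop it gracefully (Docker sends SIGTERM, then SIGKILL after a timeout)
docker stop my_gulp_container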
Related
In this Dockerfile I have an ENTRYPOINT that calls a script that simply logs an echo "testing". This output works locally when I build and run the Dockerfile. It also logs to CloudWatch when I use it in conjunction with docker-compose for AWS.
However, the RUN and CMD commands do not output anything to the console or CloudWatch. How do I see their output? I would expect at least some errors.
ENTRYPOINT bash -c "/migrate.sh"
WORKDIR /
RUN yarn
CMD ["yarn migration:run", "dist/src/main"]
I'm building just with docker build -t test:test . then docker run <imagename>
The RUN statement in the Dockerfile is only invoked when you build the image (at which point you should see the output of yarn in this case). When you docker run the container it will just execute the ENTRYPOINT and/or CMD (in this case only the ENTRYPOINT's output: because the ENTRYPOINT is written in shell form, the CMD arguments are ignored).
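If you want to see the RUN output on a later build, a sketch (assuming a BuildKit-enabled Docker; both flags are standard docker build options):
# print the full build log, including what RUN yarn writes, without reusing cached layers
docker build --progress=plain --no-cache -t test:test .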
I have the following Dockerfile
FROM rikorose/gcc-cmake
RUN git clone https://github.com/hect1995/UBIMET_Challenge.git
WORKDIR /UBIMET_Challenge
RUN mkdir build
WORKDIR build
#RUN apt-get update && apt-get -y install cmake=3.13.1-1ubuntu3 protobuf-compiler
RUN cmake ..
RUN make
CMD ["./ubimet /UBIMET_Challenge/data/1706221600.29 output.csv"]
Even though it says it executes the last line when building, it does not (or it does it incorrectly): running that last command should generate 2 files, which are not there when I check inside the container using:
docker run -t -i trial /bin/bash
Nevertheless, if I get inside the container and from there I run:
./ubimet /UBIMET_Challenge/data/1706221600.29 output.csv
It generates the files, so why does it not generate the files while building?
CMD is the default command to run when you start your container. You are overriding
this by passing /bin/bash to docker run.
Either change CMD to RUN (to run your script at build time) or run docker run without the trailing command (to run when you start the container).
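Both options sketched against the question's Dockerfile (the paths are the ones you already use):
# Option 1: generate the files at build time, right after RUN make
RUN ./ubimet /UBIMET_Challenge/data/1706221600.29 output.csv

# Option 2: fix the CMD (see the next answer) and start the container without a trailing command
docker run -t -i trial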
You are using CMD wrong. CMD has 3 forms, none of which are what you are using:
CMD
The CMD instruction has three forms:
CMD ["executable","param1","param2"] (exec form, this is the preferred form)
CMD ["param1","param2"] (as default parameters to ENTRYPOINT)
CMD command param1 param2 (shell form)
You can use CMD like this:
CMD ["./ubimet", "/UBIMET_Challenge/data/1706221600.29", "output.csv"]
I have a Python image that launches a web app, and I'm wondering if it's possible to run pytest from the container - I would like to choose whether I want to run the app or run the tests.
Is that possible?
My Dockerfile looks like:
FROM python:3.7-slim-buster
COPY ./ ./x
WORKDIR ./x
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["gunicorn", "-b", "0.0.0.0:5000", "--log-level=info", "app:app"]
Is it possible to run something like docker run x --someargumenttolaunchtests?
You can set an ARG value in your Dockerfile, which is an argument that you provide at build time. If you want to provide a value at run time, you can set an environment variable via docker run -e some_environment=value.
Then, with a bash script, you can choose what you want to run: the script checks whether some_environment equals a given value and branches accordingly. You would have to create this bash script ahead of time and either COPY it into the image or bind-mount it at run time.
So here is an example of a bash script.
#!/bin/bash
# some_environment is expected to be passed in via docker run -e some_environment=...
ENVIRONMENT="$some_environment"
if [ "$ENVIRONMENT" = "test" ]; then
    python run_test.py
else
    python main.py
fi
Before I forget, you need to set execute permissions for this bash script.
So in your Dockerfile:
COPY ./bash_script.sh /app
WORKDIR /app
RUN chmod u+x bash_script.sh
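To tie it together, a rough sketch (this replaces the original gunicorn CMD with the script, and reuses the image name x and the variable name from above):
# in the Dockerfile: make the script the default command
CMD ["./bash_script.sh"]

# at run time: pick the tests or the app
docker run -e some_environment=test x
docker run -e some_environment=prod x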
You can completely override the command and avoid gunicorn. Since the image above only defines a CMD, something like this works:
docker run --rm -it myimagename pytest
I have to execute two commands in the Dockerfile, but both of these commands attach to the terminal and block execution of the next one.
Dockerfile:
FROM sinet/nginx-node:latest
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
RUN git clone https://name:pass@bitbucket.org/joaocromg/front-web-alferes.git
WORKDIR /usr/src/app/front-web-alferes
RUN npm install
RUN npm install bower -g
RUN npm install gulp -g
RUN bower install --allow-root
COPY default.conf /etc/nginx/conf.d/
RUN nginx -g 'daemon off;' & # command 1 blocking
CMD ["gulp watch-dev"] # command 2 not executed
Does someone know how I can solve this?
Try creating a script like this:
#!/bin/sh
nginx -g 'daemon off;' &
gulp watch-dev
And then execute it in your CMD:
CMD /bin/my-script.sh
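For that to work the script has to be inside the image and executable; a minimal sketch, assuming my-script.sh sits next to your Dockerfile:
COPY my-script.sh /bin/my-script.sh
RUN chmod +x /bin/my-script.sh
CMD /bin/my-script.sh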
Also, notice your last line would not have worked:
CMD ["gulp watch-dev"]
It needed to be either:
CMD gulp watch-dev
or:
CMD ["gulp", "watch-dev"]
Also, notice that RUN is for executing a command that will change your image state (like RUN apt install curl), not for executing a program that needs to be running when you run your container. From the docs:
The RUN instruction will execute any commands in a new layer on top of the current image and commit the results. The resulting committed image will be used for the next step in the Dockerfile.
I suggest you try supervisord in this case. http://supervisord.org/
Edit: Here is a dockerized example of an httpd and ssh daemon: https://riptutorial.com/docker/example/14132/dockerfile-plus-supervisord-conf
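A minimal sketch of what the supervisord.conf could look like here (the program names are arbitrary, the paths come from your Dockerfile, and you would also have to install supervisor and point the container's CMD at supervisord):
[supervisord]
nodaemon=true

[program:nginx]
command=nginx -g 'daemon off;'

[program:gulp]
command=gulp watch-dev
directory=/usr/src/app/front-web-alferes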
The answer here is that RUN nginx -g 'daemon off;' intentionally starts nginx in the foreground, which is what blocks your second command. That invocation is meant for starting containers with nginx as the foreground process. Running RUN nginx (without daemon off) would start nginx, create the master and worker processes, and (hopefully) exit with a zero status code. Although, as mentioned above, this is not the intended use of RUN, so a bash script works best in this case.
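In other words, nginx -g 'daemon off;' belongs in the image's CMD rather than in a RUN (this is roughly what the official nginx image does):
CMD ["nginx", "-g", "daemon off;"]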
I am building a SciGraph database on my local machine and trying to move this entire folder into Docker and run it there. When I run the shell script on my local machine it runs without error, but when I add the same folder inside Docker and try to run it, it fails.
Am I doing this the right way? Here's my Dockerfile:
FROM goyalzz/ubuntu-java-8-maven-docker-image
ADD ./SciGraph /usr/share/SciGraph
WORKDIR /usr/share/SciGraph/SciGraph-services
RUN pwd
EXPOSE 9000
CMD ['./run.sh']
When I try to run it I'm getting this error:
docker run -p9005:9000 test
/bin/sh: 1: [./run.sh]: not found
If I run it using the command below it works:
docker run -p9005:9000 test -c "cd /usr/share/SciGraph/SciGraph-services && sh run.sh"
Since I already marked the directory as WORKDIR, why does running the script inside Docker using CMD throw an error?
For SciGraph, as described in their README, you need to run mvn install before you run their services. You can set your shell to bash and use docker-compose to run the Docker image as shown below.
Dockerfile
FROM goyalzz/ubuntu-java-8-maven-docker-image
ADD ./SciGraph /usr/share/SciGraph
SHELL ["/bin/bash", "-c"]
WORKDIR /usr/share/SciGraph
RUN mvn -DskipTests -DskipITs -Dlicense.skip=true install
RUN cd /usr/share/SciGraph/SciGraph-services && chmod a+x run.sh
EXPOSE 9000
Build the SciGraph Docker image by running:
docker build . -t scigraph_test
docker-compose.yml
version: '2'
services:
  scigraph-server:
    image: scigraph_test
    working_dir: /usr/share/SciGraph/SciGraph-services
    command: bash run.sh
    ports:
      - 9000:9000
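Then, from the directory containing the docker-compose.yml, start it with:
docker-compose up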
Add a / after SciGraph-services and change the command to "sh run.sh". Also look into the file permissions of run.sh.
It is likely that your run.sh doesn't have the #!/bin/bash shebang header, so it cannot be executed just by running ./run.sh. Nevertheless, always prefer to run scripts as /bin/bash foo.sh or /bin/sh foo.sh when in Docker, especially because you don't know what changes have been made to the files in images downloaded from public repositories.
So, your CMD statement would be:
CMD /bin/bash -c "/bin/bash run.sh"
You have to add the shell and the executable to the CMD array ...
CMD ["/bin/sh", "./run.sh"]