How to set up a bash file to run when starting a docker image - docker

I would like to run a bash file every time I start the docker image, but I couldn't figure out how to do this after a few hours of trying. My bash file looks like this:
#!/bin/bash
while true;
do
echo "hello world"
sleep 10
done
What I want is that when I start the container, the bash file runs inside it continuously, so it keeps doing its job for as long as the container is up.
How do I set this up? Should I build this into the docker image, or can I put the bash file in run.sh so it runs when the container starts?

Just copy your script file with COPY or ADD in the Dockerfile and then use the CMD instruction to run it.
For example, if you copy run.sh to /, then as the last line of your Dockerfile just add:
CMD /run.sh
For more info please refer to https://docs.docker.com/engine/reference/builder/ and search for 'CMD'.
Make sure the file has execute permission (otherwise, after the COPY/ADD of the file, add RUN chmod +x /run.sh).
Summary:
# Dockerfile
# source image, for example node.js
FROM some_image
# copy the run.sh script from your local file system into the image (to the root dir)
ADD run.sh /
# add execute permission to the script file (in case it doesn't have it)
RUN chmod +x /run.sh
# default behaviour when the container starts
CMD /run.sh
Hope that helps.

Your Dockerfile needs to look like this:
FROM <base_image>
ADD yourfile.sh /
CMD ["/yourfile.sh"]
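If it helps, this is roughly how you would build and run it (the hello-loop tag is just a placeholder):
docker build -t hello-loop .
docker run hello-loop
# prints "hello world" every 10 seconds until the container is stopped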

Related

How to copy a file from the host into a container while starting?

I am trying to build a docker image using a Dockerfile; my goal is to copy a file into a specific folder when I run the "docker run" command.
This is my Dockerfile:
FROM openjdk:7
MAINTAINER MyPerson
WORKDIR /usr/src/myapp
ENTRYPOINT ["cp"]
CMD ["/usr/src/myapp"]
CMD ls /usr/src/myapp
After building my image without any errors (using the docker build command), I tried to run my new image:
docker run myjavaimage MainClass.java
I got this error: cp: missing destination file operand after 'MainClass.java'
How can I resolve this? Thanks.
I think you want this Dockerfile:
FROM openjdk:7
WORKDIR /usr/src/myapp
COPY MainClass.java .
RUN javac MainClass.java
ENV CLASSPATH=/usr/src/myapp
CMD java MainClass
When you docker build this image, it COPYs your Java source file from your local directory into the image, compiles it, and sets some metadata telling the JVM where to find the resulting .class files. Then when you launch the container, it will run the single application you've packaged there.
It's common enough to use a higher-level build tool like Maven or Gradle to compile multiple files into a single .jar file. Make sure to COPY all of the source files you need in before running the build. In Java it seems to be common to build the .jar file outside of Docker and just COPY that in without needing a JDK, and that's a reasonable path too.
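For example, a sketch of that prebuilt-jar approach might look like this (the target/myapp.jar path is an assumption about where your build tool puts its output):
FROM openjdk:7
WORKDIR /usr/src/myapp
COPY target/myapp.jar ./app.jar
CMD ["java", "-jar", "app.jar"]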
In the Dockerfile you show, Docker combines ENTRYPOINT and CMD into a single command and runs that command as the single main process of the container. If you provide a command of some sort on the docker run command line, it overrides CMD but does not override ENTRYPOINT. You only get one ENTRYPOINT and one CMD, and the last one in the Dockerfile wins. So you're trying to run container processes like
# What's in the Dockerfile
cp /bin/sh -c "ls /usr/src/myapp"
# Via your docker run command
cp MainClass.java
As @QuintenScheppermans suggests in their answer, you can use a docker run -v option to inject the file at run time, but this will happen after commands like RUN javac have already happened. You don't really want a workflow where the entire application gets rebuilt every time you docker run the container. Build the image during docker build time, or before.
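For completeness, that run-time injection would look roughly like this (the host path is illustrative):
docker run -v "$PWD/MainClass.java:/usr/src/myapp/MainClass.java" myjavaimage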
Two things.
You have used CMD twice.
CMD can only be used once; think of it as the purpose of your docker image. Every time a container is run, it will always execute CMD. If you want multiple commands, use RUN for the earlier ones and then, lastly, CMD.
FROM openjdk:7
MAINTAINER MyPerson
WORKDIR /usr/src/
ENTRYPOINT ["cp"]
RUN /usr/src/myapp
RUN ls /usr/src/myapp
Copying stuff into image
There is a simple command COPY the syntax being COPY <from-here> <to-here>
It seems like you want to run myjavaimage, so what you will do is:
COPY /path/to/myjavaimage /myjavaimage
CMD myjavaimage MainClass.java
Where you see the arrows, I've just written dummy code. Replace that with the correct code.
Also, your Dockerfile is not well structured.
ENTRYPOINT: not sure why you'd use "cp" here; the entrypoint is the actual command the container runs. It could point to the root dir of your project or to an app that will be run.
I don't understand why you want to do ls /usr/src/myapp, but if you do, use RUN and not CMD.
Lastly,
The best way to debug docker containers is in interactive mode. That means getting a shell inside your container, having a look around, running code, and seeing what the problem is.
Run this: docker run -it <image-name> /bin/bash, then have a look inside; it's usually the best way to see what causes issues.
This stackoverflow page perfectly answers your question.
COPY foo.txt /data/foo.txt
# where foo.txt is the relative path on host
# and /data/foo.txt is the absolute path in the image
If you need to mount a file when running the command:
docker run --name=foo -d -v ~/foo.txt:/data/foo.txt -p 80:80 image_name

How to run a shell script on an Alpine Docker container on startup?

I want to execute a shell script when an Alpine Docker container starts up.
You should create a shell script to be copied into the new container and then set the container's entrypoint to the script you created. The syntax would be something like this:
FROM <docker/alpine-image>
#.... somecode
USER root
COPY /localmachinelocation/entrypoint.sh /containerlocation/entrypoint.sh
RUN chmod 544 /containerlocation/entrypoint.sh
#.... somecode
CMD /containerlocation/entrypoint.sh
You can specify the user that should run the script.
chmod ensures the script is runnable by its owner; you can change the mode for security reasons.
Let's say you have your script in /home/user/script.sh, you could do
FROM <docker/alpine-image>
USER root
COPY /home/user/script.sh /root/entrypoint.sh
RUN chmod 544 /root/entrypoint.sh
CMD /root/entrypoint.sh
Your Dockerfile would look like the below:
FROM alpine:latest
##Do your stuff
#####
##Last line
CMD ["entrypoint.sh"]
##OR
ENTRYPOINT ["entrypoint.sh"]
You could run it using bash.
RUN apk update
RUN apk add bash
ADD ./yourbashscript.sh .
RUN bash ./yourbashscript.sh
This way you can run as many shell scripts as you need and you're not limited to CMD.
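Put together, a sketch of that approach might be (note that RUN executes the script at build time, when the image is created):
FROM alpine:latest
RUN apk update && apk add bash
ADD ./yourbashscript.sh .
RUN bash ./yourbashscript.sh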

No file found when using ENTRYPOINT

I am trying to use ENTRYPOINT, and whenever I do I get a "no such file or directory" error.
Dockerfile:
FROM ubuntu:18.04
COPY . /home
COPY docker-entrypoint.sh /usr/local/bin/
RUN ln -s /usr/local/bin/docker-entrypoint.sh
WORKDIR /home
RUN chmod 777 /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["/bin/bash"]
I have tried giving it permissions, tried running it with an absolute path, also tried this, tried it with #!/bin/bash and #!/bin/sh, and in the end I still get the file-not-found error.
I am not sure what the problem is.
The question you asked:
I don't remember exactly why, but the file isn't being found because you're calling it docker-entrypoint.sh rather than ./docker-entrypoint.sh.
The question you'll ask soon:
That doesn't entirely fix your problem. You've added execute privileges to the copy of docker-entrypoint.sh in /usr/local/bin, but there's another copy of the file in /home that gets found first and doesn't have execute privileges. You'll get a permissions error when you try to use it. An easy workaround (depending on what you want to do) consists of a modified entrypoint:
ENTRYPOINT ["/bin/bash", "docker-entrypoint.sh"]
Extra details if you'll be using Docker a lot:
Being able to enter a container or image to examine its contents is invaluable. For ubuntu-based images, write down the following line somewhere (replace bash with sh for basically every other linux OS):
docker run -it --rm --entrypoint=bash my_image_name
This will open up a shell in that image and let you play around in the same environment the Dockerfile is running in and debug whatever is causing you problems.
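For reference, one way to sidestep both issues is to reference the script by an absolute path and give that copy execute permission; a sketch of the question's Dockerfile with those changes:
FROM ubuntu:18.04
COPY . /home
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
WORKDIR /home
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
CMD ["/bin/bash"]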

Can't figure out how to use cmd correctly to execute script in docker

I am trying to figure out how to get the CMD instruction in a Dockerfile to run a script on startup for docker run. I know that using the RUN instruction will make the image run that script while the image is being built, but I want the script to run every time I start a new container from that image. The script is just a simple script that outputs the current date/time to a file.
Here is the Dockerfile that works if I use RUN:
# Pull base image
FROM alpine:latest
# gcr.io/dev-ihm-analytics-platform/practice_docker:ulta
WORKDIR /root/
RUN apk --update upgrade && apk add bash
ADD ./script.sh ./
RUN ./script.sh
Here is the same Dockerfile, which doesn't work with CMD:
# Pull base image
FROM alpine:latest
# gcr.io/dev-ihm-analytics-platform/practice_docker:ulta
WORKDIR /root/
RUN apk --update upgrade && apk add bash
ADD ./script.sh ./
CMD ["./script.sh"]
I have tried all sorts of things after the CMD command like ["/script.sh"], ["bash script.sh"], ["bash", "./script.sh"], bash script.sh but I always get an error and I don't know what I am doing wrong. All I want is to
docker run -it name_of_container bash
and then find that the script has executed by seeing that there is an output file with the run information in the container once I am inside.
There are three basic ways to do this:
You can RUN ./script.sh. It will happen once, at docker build time, and be baked into your image.
You can CMD ./script.sh. It will happen once and be the single command the container runs. If you provide some alternate command (docker run ... bash, for instance), that runs instead of this CMD.
You can write a custom entrypoint script that does this first-time setup, then runs the CMD or whatever got passed on the command line. The main container process is the entrypoint, and it gets passed the command as arguments. This script (and whatever it does inside) will get run on every startup. This script can look something like
#!/bin/sh
./script.sh
exec "$#"
It needs to be separately COPYd into the image, and then you’d set something like ENTRYPOINT ["./entrypoint.sh"].
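Putting that pattern together, a sketch of the Dockerfile might look like this (file names are placeholders; the Alpine base matches the question):
FROM alpine:latest
WORKDIR /root/
RUN apk --update upgrade && apk add bash
COPY script.sh entrypoint.sh ./
RUN chmod +x script.sh entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
CMD ["bash"]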
(Given the problem as you’ve actually described it — you have a shell script and you want to run it and inspect the file output in an interactive shell — I’d just run it at your local command prompt and not involve Docker at all. This avoids all of these sequencing and filesystem mapping issues.)
There are multiple ways to achieve what you want, but your first attempt, with the RUN ./script.sh line, is probably the best.
The CMD and ENTRYPOINT commands are overridable on the command-line as flags to the container run command. So, if you want to ensure that this is run every time you start the container, then it shouldn't be part of the CMD or ENTRYPOINT commands.
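For instance (the image name is a placeholder):
docker run my_image                  # runs the CMD baked into the image
docker run my_image ./script.sh      # overrides CMD for this one run
docker run --entrypoint sh my_image  # overrides ENTRYPOINT for this one run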
Well, I am using the CMD instruction to start my Java applications, and once the WORKDIR is set inside the container I execute the following:
CMD ["/usr/bin/java", "-jar", "-Dspring.profiles.active=default", "/app.jar"]
Have you tried removing the "." in the CMD instruction so it looks like this:
CMD ["/script.sh"]
There might be a different syntax when using RUN versus CMD.

Docker Image Entry Point Script Not Found

I have a Dockerfile like:
FROM frolvlad/alpine-oraclejdk8:slim
ADD build/libs/zuul*.jar /app.jar
ADD src/main/script/startup.sh /startup.sh
EXPOSE 8080 8999
ENTRYPOINT ["/startup.sh"]
startup.sh looks like:
#!/bin/sh
set -e
if [ $# -eq 0 ]
then
echo "Environment value required"
exit 1
fi
java -jar -Xms400m -Xmx400m -Dlog4j.configurationFile=log4j2-qa2.xml -Djava.security.egd=file:/dev/./urandom -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8999 -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.rmi.port=8999 app.jar
But when I run it with docker run, I got docker: Error response from daemon: Container command '/startup.sh' not found or does not exist... The shell script has execute permission.
I used the same approach to run my other apps and they're all working fine. I wrote the files on a Mac and tried to run the container on a Linux machine.
It turns out to be the ^M DOS-style line ending character that caused the issue. But since I'm editing on a Mac and I checked several times with vim, I'm pretty sure the startup script on my local machine doesn't have that char. But when opened with vim inside the container, I can see ^M everywhere. So somehow that char gets added to startup.sh when it is copied into the image, which is weird. That prevents the script from being invoked.
The solution is to add dos2unix filename before the ENTRYPOINT in Dockerfile. But make sure that your base image has that utility.
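In an Alpine-based image like the one above, that could look roughly like this (the exact package-manager command depends on your base image):
# after the ADD of startup.sh, before the ENTRYPOINT
RUN apk add --no-cache dos2unix && dos2unix /startup.sh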
The shell script has execute permission.
Are you sure, though? (I mean within the container, once ADDed.)
To be sure, I would use the Dockerfile:
EXPOSE 8080 8999
COPY src/main/script/startup.sh /startup.sh
RUN chmod 755 /startup.sh
WORKDIR /
ENTRYPOINT ["/startup.sh"]
A container exits when its main process exits. So check that /startup.sh is not ending.
In particular, check that this java command keeps running (and does not exit):
java -jar -Xms400m -Xmx400m -Dlog4j.configurationFile=log4j2-qa2.xml \
-Djava.security.egd=file:/dev/./urandom \
-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.port=8999 \
-Dcom.sun.management.jmxremote.local.only=false \
-Dcom.sun.management.jmxremote.rmi.port=8999 \
app.jar
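A quick way to check is to look at the stopped container's exit code and output (the container ID is whatever docker ps -a reports):
docker ps -a                  # shows the exit code of the stopped container
docker logs [CONTAINER_ID]    # shows whatever the java command printed before exiting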
I think the trouble here is that when you specify a path like /app.jar, it is really difficult to tell where the current working directory actually is.
It can be anywhere, and I suspect Docker may have copied your startup.sh to wherever ./ happened to point at that moment.
You might want to get a shell inside the container and do a find to see where startup.sh is sitting.
docker ps -a - Find docker container ID
docker exec -i -t [CONTAINER_ID] /bin/bash - Get a shell inside
find . -name "startup.sh" - Look for file.
If you intend to copy the files using ./ (the current working directory), I think it would be better to specify where "current" is. You can do this using the WORKDIR instruction.
Example:
WORKDIR /path/to/workdir
The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile. If the WORKDIR doesn't exist, it will be created even if it's not used in any subsequent Dockerfile instruction.
That way docker will not get confused.
See:
https://docs.docker.com/engine/reference/builder/#/workdir
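As a sketch, the Dockerfile from the question with an explicit WORKDIR might look like this (the /app directory name is an assumption):
FROM frolvlad/alpine-oraclejdk8:slim
WORKDIR /app
ADD build/libs/zuul*.jar app.jar
ADD src/main/script/startup.sh startup.sh
RUN chmod +x startup.sh
EXPOSE 8080 8999
ENTRYPOINT ["/app/startup.sh"]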
I had a similar problem which led me to this thread; however, my issue was not quite the same as yours. For me the problem was that I was using an Alpine base image and my script was referencing #!/bin/bash; I just had to change it to #!/bin/sh at the top of my script instead.
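If the script needs to keep #!/bin/bash for some reason, the alternative is to install bash into the Alpine image instead (one extra layer):
RUN apk add --no-cache bash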
