auto start script in debian - docker

I want to have an auto start script for my project. I use Docker and want a script that runs when the container starts.
I tried running update-rc.d and I don't get any problems, but the symbolic links don't get generated. I checked in the file explorer and with my script:
mkdir /var/www/$(date +%Y%m%d_%H%M%S)
But nothing happened.
This is in my Dockerfile:
COPY starter.sh /etc/init.d/starter.sh
RUN chmod +x /etc/init.d/starter.sh
RUN chmod 755 /etc/init.d/starter.sh
RUN update-rc.d starter.sh defaults 10
I don't get any error messages. That's my problem :)

Use the instructions below in your Dockerfile:
COPY starter.sh /starter.sh
RUN chmod +x /starter.sh && chmod 0755 /starter.sh
ENTRYPOINT ["/starter.sh"]
CMD ["defaults", "10"]

Commands in RUN statements are executed at image build time. They can generate files in the Docker image, but they cannot create processes that remain running after the image build finishes (how would that even work?)
The simplest solution is probably to create an entrypoint script which starts your service(s) and then runs any user-supplied command.
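For example, a minimal sketch of what such a starter.sh could look like, reusing the test command from the question (the exact setup work is a placeholder; adapt it to whatever your container really needs):
#!/bin/sh
set -e
# One-time setup work that should happen on every container start
mkdir -p /var/www/"$(date +%Y%m%d_%H%M%S)"
# Hand control over to whatever command was passed in (CMD or docker run arguments)
exec "$@"
With the ENTRYPOINT/CMD pair above, the CMD values ("defaults", "10") would be passed to the script as arguments; in practice you would replace them with the container's real main command.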

Related

Setup different user permissions on files copied in Dockerfile

I have this Dockerfile setup:
FROM node:14.5-buster-slim AS base
WORKDIR /app
FROM base AS production
ENV NODE_ENV=production
RUN chown -R node:node /app
RUN chmod 755 /app
USER node
... other copies
COPY ./scripts/startup-production.sh ./
COPY ./scripts/healthz.sh ./
CMD ["./startup-production.sh"]
The problem I'm facing is that I can't execute ./healthz.sh because it's only executable by the node user. When I commented out the two RUN commands and the USER command, I could execute the file just fine. But for security reasons I want to grant the executable permission only to the node user.
I need ./healthz.sh to be externally executable by Kubernetes' liveness & readiness probes.
How can I make that work? Folder restructuring or things like that are fine with me.
In most cases, you probably want your code to be owned by root, but to be world-readable, and for scripts to be world-executable. The Dockerfile COPY directive copies in a file with its existing permissions from the host system (hidden in the list of bullet points at the end is a note that a file "is copied individually along with its metadata"). So the easiest way to approach this is to make sure the script has the right permissions on the host system:
# mode 0755 is readable and executable by everyone but only writable by owner
chmod 0755 healthz.sh
git commit -am 'make healthz script executable'
Then you can just COPY it in, without any special setup.
# Do not RUN chown or chmod; just
WORKDIR /app
COPY ./scripts/healthz.sh .
# Then when launching the container, specify
USER node
CMD ["./startup-production.sh"]
You should be able to verify this locally by running your container and manually invoking the health-check script:
docker run -d --name app the-image
# possibly with a `docker exec -u` option to specify a different user
docker exec app /app/healthz.sh && echo OK
The important thing to check is that the file is world-executable. You can also double-check this by looking inside the built image:
docker run --rm the-image ls -l /app/healthz.sh
That should print out one line, starting with a permission string -rwxr-xr-x; the last three characters r-x are the important part. If you can't get the permissions right another way, you can also fix them up in your image build:
COPY ./scripts/healthz.sh .
# If you can't make the permissions on the original file right:
RUN chmod 0755 *.sh
You need to modify your Dockerfile's CMD like this: CMD ["sh", "./startup-production.sh"]
This will interpret the script with sh, which can be dangerous if your script uses bash-specific features such as [[ ]] while having #!/bin/bash as its first line.
Moreover, I would say use ENTRYPOINT here instead of CMD if you want this to run whenever the container is up.
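For example, a hedged one-line sketch of that ENTRYPOINT form:
ENTRYPOINT ["sh", "./startup-production.sh"]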

Run a script on container start

I want to have a script that runs in my Docker container on every start/restart. It should run the following in the container's bash:
cd app
Console/cake schema update
and
Console/cake migration
I tried to run a process or write something in my Dockerfile, but none of that works for me. I also read "Run multiple services in a container" in the Docker docs, but I didn't find a solution.
COPY starter.sh /etc/init.d/starter.sh
RUN chmod +x /etc/init.d/starter.sh
RUN chmod 755 /etc/init.d/starter.sh
RUN update-rc.d starter defaults 10
RUN /etc/init.d/starter.sh
In my starter.sh there is some test code like
RUN mkdir /var/www/hello
so that I know whether it works.
Make use of ENTRYPOINT in the Dockerfile.
Add these lines to the Dockerfile:
COPY starter.sh /opt/starter.sh
ENTRYPOINT ["/opt/starter.sh"]
Update:
If you want to run the Apache web server, then add these lines:
ENTRYPOINT ["/path/to/apache2"]
CMD ["-D", "FOREGROUND"]
This will run apache2 as the first process inside the container, kept in the foreground.

No file found when using ENTRYPOINT

I am trying to use ENTRYPOINT, and whenever I do I get an error: no such file or directory
Dockerfile:
FROM ubuntu:18.04
COPY . /home
COPY docker-entrypoint.sh /usr/local/bin/
RUN ln -s /usr/local/bin/docker-entrypoint.sh
WORKDIR /home
RUN chmod 777 /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["/bin/bash"]
I have tried giving it permissions, running it with an absolute path, and using both #!/bin/bash and #!/bin/sh as the shebang, and in the end I still get the file-not-found error.
I am not sure what the problem is.
The question you asked:
I don't remember exactly why, but the file isn't being found because you're calling it docker-entrypoint.sh rather than ./docker-entrypoint.sh.
The question you'll ask soon:
That doesn't entirely fix your problem. You've added execute permission to the copy of docker-entrypoint.sh in /usr/local/bin, but there's another copy of the file in /home that gets found first and doesn't have execute permission. You'll get a permissions error when you try to use it. An easy workaround (depending on what you want to do) is a modified entrypoint:
ENTRYPOINT ["/bin/bash", "docker-entrypoint.sh"]
Extra details if you'll be using Docker a lot:
Being able to enter a container or image to examine its contents is invaluable. For Ubuntu-based images, write down the following line somewhere (replace bash with sh for basically every other Linux OS):
docker run -it --rm --entrypoint=bash my_image_name
This will open up a shell in that image and let you play around in the same environment the Dockerfile is running in and debug whatever is causing you problems.

Why doesn't the container execute scripts inside /etc/my_init.d/ on startup?

I have the following Dockerfile:
FROM phusion/baseimage:0.9.16
RUN mv /build/conf/ssh-setup.sh /etc/my_init.d/ssh-setup.sh
EXPOSE 80 22
CMD ["node", "server.js"]
My /build/conf/ssh-setup.sh looks like the following:
#!/bin/sh
set -e
echo "${SSH_PUBKEY}" >> /var/www/.ssh/authorized_keys
chown www-data:www-data -R /var/www/.ssh
chmod go-rwx -R /var/www/.ssh
It just appends the SSH_PUBKEY environment variable to /var/www/.ssh/authorized_keys to enable SSH access.
I run my container just like the following:
docker run -d -p 192.168.99.100:80:80 -p 192.168.99.100:2222:22 \
-e SSH_PUBKEY="$(cat ~/.ssh/id_rsa.pub)" \
--name dev hub.core.test/dev
My container starts fine, but unfortunately the /etc/my_init.d/ssh-setup.sh script doesn't get executed and I'm unable to SSH into my container.
Could you help me understand why /etc/my_init.d/ssh-setup.sh doesn't get executed when my container starts?
I had a pretty similar issue, also using phusion/baseimage. It turned out that my start script needed to be executable, e.g.
RUN chmod +x /etc/my_init.d/ssh-setup.sh
Note:
I noticed you're not using baseimage's init system (maybe on purpose?). But, from my understanding of their manifesto, doing that forgoes their whole "a better init system" approach.
My understanding is that they want you to, in your case, move your start command node server.js into a script under my_init.d, e.g. /etc/my_init.d/start.sh, and use their init system as the start command in your Dockerfile, e.g.
FROM phusion/baseimage:0.9.16
RUN mv /build/conf/start.sh /etc/my_init.d/start.sh
RUN mv /build/conf/ssh-setup.sh /etc/my_init.d/ssh-setup.sh
RUN chmod +x /etc/my_init.d/start.sh
RUN chmod +x /etc/my_init.d/ssh-setup.sh
EXPOSE 80 22
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]
That'll start baseimage's init system, which will then go and look in your /etc/my_init.d/ and execute all the scripts in there in alphabetical order. And, of course, they should all be executable.
My references for this are: Running start scripts and Getting Started.
As the previous answer states, you did not execute ssh-setup.sh. You can only have one process in a Docker container (that is a lie, but it will do for now). Why not run ssh-setup.sh as your CMD/ENTRYPOINT process and have ssh-setup.sh exec into your final command, i.e.
exec node server.js
Or cleaner, have a script, like boot.sh, which runs any init scripts, like ssh-setup.sh, then execs to node.
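A minimal sketch of such a boot.sh, using the names from this question (treat it as an illustration rather than a drop-in file):
#!/bin/sh
set -e
# Run one-time init work first
/etc/my_init.d/ssh-setup.sh
# Then replace the shell with the real main process
exec node server.js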
Because you didn't invoke /etc/my_init.d/ssh-setup.sh when you started your container.
You should call it in CMD or ENTRYPOINT; read more here:
RUN executes command(s) in a new layer and creates a new image. E.g., it is often used for installing software packages.
CMD sets the default command and/or parameters, which can be overwritten from the command line when the docker container runs.
ENTRYPOINT configures a container that will run as an executable.
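A tiny, hedged illustration of how the three differ (the image and commands are hypothetical, not taken from this question):
FROM ubuntu:18.04
# RUN happens at build time and its result is baked into the image
RUN apt-get update && apt-get install -y curl
# ENTRYPOINT is the executable every container from this image starts with
ENTRYPOINT ["curl"]
# CMD supplies default arguments that `docker run <image> ...` can override
CMD ["--help"]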

Docker Image Entry Point Script Not Found

I have a Dockerfile like:
FROM frolvlad/alpine-oraclejdk8:slim
ADD build/libs/zuul*.jar /app.jar
ADD src/main/script/startup.sh /startup.sh
EXPOSE 8080 8999
ENTRYPOINT ["/startup.sh"]
startup.sh looks like:
#!/bin/sh
set -e
if [ $# -eq 0 ]
then
echo "Environment value required"
exit 1
fi
java -jar -Xms400m -Xmx400m -Dlog4j.configurationFile=log4j2-qa2.xml -Djava.security.egd=file:/dev/./urandom -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8999 -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.rmi.port=8999 app.jar
But when I run it with docker run, I get docker: Error response from daemon: Container command '/startup.sh' not found or does not exist... The shell script has execute permission.
I ran my other apps the same way and they're all working fine. I wrote the files on a Mac and tried to run the container on a Linux machine.
It turns out the ^M DOS-style line-ending character caused the issue. But since I'm editing on a Mac and checked several times with vim, I'm pretty sure the startup script on my local machine doesn't have that character. Yet when I open it with vim inside the container, I can see ^M everywhere. So somehow that character gets added to startup.sh when it is copied into the image, which is weird. That prevents the script from being invoked.
The solution is to run dos2unix on the file before the ENTRYPOINT in the Dockerfile. But make sure that your base image has that utility.
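A hedged sketch of where that fix could sit in the Dockerfile from this question (it assumes dos2unix is available in the base image; on an Alpine base it may need to be installed first):
ADD src/main/script/startup.sh /startup.sh
# Strip DOS line endings so the #!/bin/sh line is recognised
RUN dos2unix /startup.sh
ENTRYPOINT ["/startup.sh"]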
The shell script has execute permission.
Are you sure, though? (I mean within the container, once ADDed.)
To be sure, I would use this Dockerfile:
EXPOSE 8080 8999
COPY src/main/script/startup.sh /startup.sh
RUN chmod 755 /startup.sh
WORKDIR /
ENTRYPOINT ["/startup.sh"]
A container exits when its main process exits, so check that /startup.sh does not end prematurely.
In particular, check that this java command keeps running in the foreground:
java -jar -Xms400m -Xmx400m -Dlog4j.configurationFile=log4j2-qa2.xml \
-Djava.security.egd=file:/dev/./urandom \
-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.port=8999 \
-Dcom.sun.management.jmxremote.local.only=false \
-Dcom.sun.management.jmxremote.rmi.port=8999 \
app.jar
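One extra check, which is my own addition rather than part of this answer: if startup.sh launches java without exec, the shell remains the container's main process; using exec makes the JVM the main process, so the container lives exactly as long as the application does:
#!/bin/sh
set -e
# exec replaces the shell with the JVM (add back the -D flags from above as needed)
exec java -jar -Xms400m -Xmx400m app.jar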
I think the trouble here is that when you specify a path like /app.jar, it is really difficult to work out where the current working directory actually is.
It can be anywhere, and I suspect Docker copied your startup.sh to wherever ./ happened to point at that moment.
You might want to open a shell inside the container and do a find to see where startup.sh is sitting.
docker ps -a - Find the Docker container ID
docker exec -i -t [CONTAINER_ID] /bin/bash - Open a shell inside the container
find . -name "startup.sh" - Look for the file
If you intend to copy files using ./ (the current working directory), I think it would be better to specify what the current directory is. You can do this using the WORKDIR instruction.
Example:
WORKDIR /path/to/workdir
The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile. If the WORKDIR doesn't exist, it will be created even if it's not used in any subsequent Dockerfile instruction.
That way Docker will not get confused.
See:
https://docs.docker.com/engine/reference/builder/#/workdir
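A hedged sketch of how WORKDIR removes that ambiguity (paths are illustrative only):
WORKDIR /app
# Both the COPY destination and the ENTRYPOINT now resolve relative to /app
COPY src/main/script/startup.sh ./startup.sh
ENTRYPOINT ["./startup.sh"]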
I had a similar problem that led me to this thread; however, my issue was not quite the same as yours. For me, the problem was that I was using an Alpine base image and my script referenced #!/bin/bash. I just had to change the first line of my script to #!/bin/sh instead.
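If the script genuinely needs bash features, another option (my assumption, not something this answer suggests) is to install bash in the Alpine image instead of changing the shebang:
RUN apk add --no-cache bash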
