I am trying to launch multiple startup scripts that automate my CI/CD tasks, but I only get the output of entrypoint.sh. How can I force the other scripts to execute as well?
entrypoint.sh
#!/bin/sh
IFS=$',\n' ## set IFS to break on comma or newline
for host in $HOSTS; do
## mkdir -p "letsencrypt/live/${host}/fullchain.pem"
echo "mkdir -p letsencrypt/live/${host}/fullchain.pem"
done
init-letsencrypt.sh
#!/bin/sh
echo "cool"
xxxxx:~/xx$ docker-compose logs nginx
Attaching to platform_nginx_1
nginx_1 | mkdir -p letsencrypt/live/domain.io/fullchain.pem
nginx_1 | mkdir -p letsencrypt/live/www.domain.io/fullchain.pem
nginx_1 | mkdir -p letsencrypt/live/api.domain.io/fullchain.pem
nginx_1 | mkdir -p letsencrypt/live/app.domain.io/fullchain.pem
FROM nginx:1.19.0-alpine
# Install certbot for letsencrypt certificates
RUN apk add --no-cache certbot
COPY . /etc/nginx/
# Directory needed for Let's Encrypt certificate renewal
RUN mkdir /var/lib/certbot
# Add scripts and auto-renewal scripts
COPY ./bin/entrypoint.sh /entrypoint.sh
COPY ./bin/init-letsencrypt.sh /init-letsencrypt.sh
# Make them executable
RUN chmod +x /entrypoint.sh
RUN chmod +x /init-letsencrypt.sh
# Install certificates and launch
ENTRYPOINT /entrypoint.sh
To run something at build time, you should use RUN.
CMD and ENTRYPOINT are used to launch the main process of your container. A container is "just" a process encapsulated in namespaces, basically. A container runs until that process stops or dies. When you specify your entrypoint.sh as the entrypoint, you are saying that the main process of your container is this script. To put it differently: the only goal of this container is to execute this script and then die.
You should use RUN to launch both of your scripts, then CMD or ENTRYPOINT to launch your nginx (most probably ENTRYPOINT, you will get why if you read the docs ;))
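For illustration, a minimal sketch of the Dockerfile tail this answer describes, assuming the two scripts only need to run once at build time and that HOSTS is already known then (otherwise the runtime-dependent parts belong in a single entrypoint script):
# Run the setup scripts while the image is being built...
RUN chmod +x /entrypoint.sh /init-letsencrypt.sh && \
    /entrypoint.sh && \
    /init-letsencrypt.sh
# ...and keep nginx as the container's main process (same as the base image default)
CMD ["nginx", "-g", "daemon off;"]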
Related
I'm using Docker with Windows 10. The Dockerfile for my app includes the following lines:
# Add a script to be executed every time the container starts.
COPY docker/entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
The problem is that because the OS is Win 10, there is no /usr/bin/ path (the equivalent, I guess, would be C:\Program Files). So when I run docker-compose up (in VS Code's Bash terminal), I get the following error:
my_app_name | exec /usr/bin/entrypoint.sh: no such file or directory
my_app_name exited with code 1
Changing the path in the Dockerfile doesn't seem like a good idea, because then Linux users will have the same problem. What is the right way to handle this for compatibility with both Windows and Linux?
EDIT: the entrypoint.sh script is as follows:
#!/bin/bash
set -e
# Remove a potentially pre-existing server.pid for Rails.
rm -f /docker-rails/tmp/pids/server.pid
# Then exec the container's main process (what's set as CMD in the Dockerfile).
exec "$#"
and the entire Dockerfile is:
FROM ruby:2.6.2
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client cron
RUN mkdir /docker-rails
WORKDIR /docker-rails
COPY Gemfile /docker-rails/Gemfile
COPY Gemfile.lock /docker-rails/Gemfile.lock
WORKDIR /docker-rails
RUN bundle install
COPY . /docker-rails
# Add a script to be executed every time the container starts.
COPY docker/entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]
I have a base container that has an ENTRYPOINT that runs as root:
Base container Dockerfile:
FROM docker.io/opensuse/leap:latest
# Add scripts to be executed during startup
COPY startup /startup
ADD https://example.com/install-ca-cert.sh /startup/startup.d/install-ca-cert-base.sh
RUN chmod +x /startup/* /startup/startup.d/*
# Add Tini
ENV TINI_VERSION v0.18.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--", "/startup/startup.sh"]
And a derived container that uses gosu to perform a root step down after the startup scripts have been run as root:
Derived container Dockerfile:
ADD ./gosu-entrypoint.sh /usr/local/bin/gosu-entrypoint.sh
RUN chmod +x /usr/local/bin/gosu-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/gosu-entrypoint.sh"]
CMD ["whoami"]
gosu-entrypoint.sh:
#!/bin/bash
set -e
# Call original entrypoint (as root)
/tini -s /startup/startup.sh
# If GOSU_USER environment variable is set, execute the specified command as that user
if [ -n "$GOSU_USER" ]; then
useradd --shell /bin/bash --system --user-group --create-home $GOSU_USER
exec /usr/local/bin/gosu $GOSU_USER "$@"
else
# else GOSU_USER environment variable is not set, execute the specified command as the default (root) user
exec "$#"
fi
This all works fine: by setting the GOSU_USER env var and running the container, the startup scripts are executed as root, and the CMD is executed as GOSU_USER:
export GOSU_USER=jim
docker run my-derived-container
# outputs "jim"
...
unset GOSU_USER
docker run my-derived-container
# outputs root
However, can the above approach (perhaps modified) work with the Kubernetes securityContext runAsUser and runAsGroup directives?
spec:
securityContext:
runAsUser: 1000
runAsGroup: 3000
fsGroup: 2000
I think these directives are turned into the containerd equivalent of docker run --user=xxx:yyy, so as such, they wouldn't work, since this:
docker run --user $(id -u):$(id -g) my-derived-container
results in a permissions error due to the startup scripts not being run as root anymore.
I have seen examples of entrypoint.sh scripts that allow the container to be started with the --user flag, but I'm not sure if that's something I can use here, i.e. even if the --user flag is provided, I still need the startup scripts to run as root:
https://github.com/docker-library/redis/blob/master/5.0/docker-entrypoint.sh#L11
# allow the container to be started with `--user`
if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then
find . \! -user redis -exec chown redis '{}' +
exec gosu redis "$0" "$@"
fi
exec "$#"
Update: looking again at the redis example above, I'm not sure it really does allow the container to be started with --user as it states; looking at the Dockerfile, redis-server is the CMD passed to the script as $1:
https://github.com/docker-library/redis/blob/master/5.0/Dockerfile#L118
CMD ["redis-server"]
and the redis user is just hardcoded in the above docker-entrypoint.sh.
I want to build my own custom docker image from the nginx image.
I override the ENTRYPOINT of nginx with my own ENTRYPOINT file.
Which brings me to two questions:
I think I lose some instructions from the nginx image by doing so (like exposing the port). Am I right?
If I want to restart nginx I run these commands: nginx -t && systemctl reload nginx, but the output is:
nginx: configuration file /etc/nginx/nginx.conf test is successful
/entrypoint.sh: line 5: systemctl: command not found
How to fix that?
FROM nginx:latest
WORKDIR /
RUN echo "deb http://ftp.debian.org/debian stretch-backports main" >> /etc/apt/sources.list
RUN apt-get -y update && \
apt-get -y install apt-utils && \
apt-get -y upgrade && \
apt-get -y clean
# I ALSO WANT TO INSTALL CERTBOT FOR LATER USE (in my entrypoint file)
RUN apt-get -y install python-certbot-nginx -t stretch-backports
# COPY ./something ./tothisimage
# COPY ./something ./tothisimage
# COPY ./something ./tothisimage
# COPY ./something ./tothisimage
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["bash", "/entrypoint.sh"]
entrypoint.sh
echo "in entrypoint"
# I want to run some commands here...
# After I want to run nginx normally....
nginx -t && systemctl reload nginx
echo "after reload"
This will work using the service command:
echo "in entrypoint"
# I want to run some commands here...
# After I want to run nginx normally....
nginx -t && service nginx reload
echo "after reload"
output:
in entrypoint
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Restarting nginx: nginx.
after reload
Commands like service and systemctl mostly just don't work in Docker, and you should totally ignore them.
At the point where your entrypoint script is running, it is literally the only thing that is running. That means you don't need to restart nginx, because it hasn't started the first time yet. The standard pattern here is to use the entrypoint script to do some first-time setup; it will be passed the actual command to run as arguments, so you need to tell it to run them.
#!/bin/sh
echo "in entrypoint"
# ... do first-time setup ...
# ...then run the command, nginx or otherwise
exec "$#"
(Try running docker run --rm -it myimage /bin/sh. You will get an interactive shell in a new container, but after this first-time setup has happened.)
The one thing you do lose in your Dockerfile is the default CMD from the base image (setting an ENTRYPOINT resets that). You need to add back that CMD:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
You should keep the other settings from the base image, like ENV definitions and EXPOSEd ports.
The "systemctl" command is specific to some SystemD based operating system. But you do not have such a SystemD daemon running on PID 1 - so even if you install those packages it wont work.
You can only check in the nginx.service file which command the "reload" would execute for real. Or have something like the docker-systemctl-replacement script do it for you.
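For reference, the reload that nginx.service performs ultimately just signals the running nginx master process, which you can do directly inside the container once nginx is running (a sketch, not taken from either answer above):
# Test the configuration, then ask the running master process to reload it
nginx -t && nginx -s reload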
I'm trying to configure my docker container so it's possible to ssh into it (the container will be run on Azure). I managed to create an image that enables the user to ssh into a container created from that image; the Dockerfile looks like this (it's not mine, I found it on the internet):
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
EXPOSE 2222
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
COPY sshd_config /etc/ssh
RUN echo 'root:Docker' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
CMD ["/usr/sbin/sshd", "-D"]
I'm using mcr.microsoft.com/dotnet/core/sdk:2.2-stretch because it's what I need later on to run the application.
Having the Dockerfile above, I run docker build . -t ssh. I can confirm that it's possible to ssh into a container created from the ssh image with the following commands:
docker run -d -p 0.0.0.0:2222:22 --name ssh ssh
ssh root@localhost -p 2222
My application's Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
WORKDIR /src
COPY ["Application.WebAPI/Application.WebAPI.csproj", "Application.WebAPI/"]
COPY ["Processing.Dependency/Processing.Dependency.csproj", "Processing.Dependency/"]
COPY ["Processing.QueryHandling/Processing.QueryHandling.csproj", "Processing.QueryHandling/"]
COPY ["Model.ViewModels/Model.ViewModels.csproj", "Model.ViewModels/"]
COPY ["Core.Infrastructure/Core.Infrastructure.csproj", "Core.Infrastructure/"]
COPY ["Model.Values/Model.Values.csproj", "Model.Values/"]
COPY ["Sql.Business/Sql.Business.csproj", "Sql.Business/"]
COPY ["Model.Events/Model.Events.csproj", "Model.Events/"]
COPY ["Model.Messages/Model.Messages.csproj", "Model.Messages/"]
COPY ["Model.Commands/Model.Commands.csproj", "Model.Commands/"]
COPY ["Sql.Common/Sql.Common.csproj", "Sql.Common/"]
COPY ["Model.Business/Model.Business.csproj", "Model.Business/"]
COPY ["Processing.MessageBus/Processing.MessageBus.csproj", "Processing.MessageBus/"]
COPY [".Processing.CommandHandling/Processing.CommandHandling.csproj", "Processing.CommandHandling/"]
COPY ["Processing.EventHandling/Processing.EventHandling.csproj", "Processing.EventHandling/"]
COPY ["Sql.System/Sql.System.csproj", "Sql.System/"]
COPY ["Application.Common/Application.Common.csproj", "Application.Common/"]
RUN dotnet restore "Application.WebAPI/Application.WebAPI.csproj"
COPY . .
WORKDIR "/src/Application.WebAPI"
RUN dotnet build "Application.WebAPI.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "Application.WebAPI.csproj" -c Release -o /app
FROM ssh AS final
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "Application.WebApi.dll"]
As you can see I'm using the ssh image as the base image in the final stage. Even though I was able to ssh into a container created from the ssh image, I'm unable to ssh into a container created from the latter Dockerfile. Here is the docker-compose.yml I'm using in order to ease starting the container:
version: '3.7'
services:
application.webapi:
image: application.webapi
container_name: webapi
ports:
- "0.0.0.0:5000:80"
- "0.0.0.0:2222:22"
build:
context: .
dockerfile: Application.WebAPI/Dockerfile
environment:
- ASPNETCORE_ENVIRONMENT=docker
When I run docker exec -it webapi bash and execute service ssh status, I get [FAIL] sshd is not running ... failed! - but when I do service ssh start and try to ssh into that container, it works. Unfortunately this approach is not acceptable; the ssh daemon should launch itself on startup.
I tried using cron and other tools available on Debian, but it's a slim image and systemd is not available there - I'm also not fond of installing hundreds of packages on slim images.
Do you have any ideas what could be wrong here?
You have conflicting startup command definitions in your final image. Note that CMD does not simply run a command in your image, it defines the startup command, and has a complex interaction with ENTRYPOINT (in short: if both are present, CMD just supplies extra arguments to ENTRYPOINT).
You can see the table of possibilities in the Dockerfile documentation: https://docs.docker.com/engine/reference/builder/. In addition, there's a bonus complication when you mix and match CMD and ENTRYPOINT in different layers:
Note: If CMD is defined from the base image, setting ENTRYPOINT will reset CMD to an empty value. In this scenario, CMD must be defined in the current image to have a value.
As far as I know, you can't get what you want just by layering images. You will need to create a startup script in your final image that both runs sshd -D and then runs dotnet Application.WebApi.dll.
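A minimal sketch of such a script, assuming a hypothetical file name start.sh and the paths already used in this question (sshd goes to the background, the .NET app remains the main process):
#!/bin/sh
set -e
# Start the ssh daemon in the background
/usr/sbin/sshd
# Hand PID 1 over to the application
exec dotnet Application.WebApi.dll
and in the final stage of the Dockerfile:
COPY start.sh /start.sh
RUN chmod +x /start.sh
ENTRYPOINT ["/start.sh"]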
I am currently trying to deal with a deployment to a kubernetes cluster. The deployment keeps failing with the response
Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"/entrypoint.sh\": permission denied"
I have tried to change the permissions on the file, which seems to succeed: if I run ls -l I get -rwxr-xr-x as the permissions for the file.
I have tried placing the chmod command both in the Dockerfile itself and prior to the image being built and uploaded, but neither seems to make any difference.
Any ideas why I am still getting the error?
dockerfile below
FROM node:10.15.0
CMD []
ENV NODE_PATH /opt/node_modules
# Add kraken files
RUN mkdir -p /opt/kraken
ADD . /opt/kraken/
# RUN chown -R node /opt/
WORKDIR /opt/kraken
RUN npm install && \
npm run build && \
npm prune --production
# Add the entrypoint
COPY ./entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
USER node
ENTRYPOINT ["/entrypoint.sh"]
This error is not about the entrypoint itself but about the command inside it. Always start scripts with an explicit interpreter, "sh script.sh", whether in ENTRYPOINT or CMD. In this case it would be: ENTRYPOINT ["sh", "entrypoint.sh"]
I created a GitHub Action with a Dockerfile and an entrypoint.sh file. I ran chmod +x on my computer and pushed to the GitHub repository; I did not RUN chmod +x in the Dockerfile. It works.
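If the local chmod does not survive the push (for example on Windows), the executable bit can also be recorded directly in the git index, which is what actually lands in the repository (a sketch, not part of the original answer):
# Mark the script as executable in git and commit that mode change
git update-index --chmod=+x entrypoint.sh
git commit -m "Make entrypoint.sh executable"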
try docker exec -it /bin/sh