I have an nginx container, with the following Dockerfile:
FROM nginx:1.19.2
COPY ./nginx.conf /etc/nginx/nginx.conf
COPY ./conf.d /etc/nginx/conf.d
WORKDIR /etc/nginx/conf.d
RUN ln -s /etc/nginx/conf.d/my-site/my-domain.generic.conf \
&& ln -s /etc/nginx/conf.d/my-site/my-domain.conf
COPY ./certs/* /etc/ssl/
and I have the following docker-compose file:
version: '3.5'
services:
  my-site_nginx:
    container_name: my-site_nginx
    build:
      context: ./nginx
      network: host
    image: my-site_nginx
    ports:
      - '80:80'
      - '443:443' # SSL
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
I am looking for a way to have the nginx service inside the container auto-reload (nginx -s reload) when I change anything inside the conf.d folder, as well as in the nginx.conf file located at the same level as the conf.d folder.
The closest thing I've found was this tutorial here: https://cyral.com/blog/how-to-auto-reload-nginx/
But I had to adapt the paths a bit, and I don't know what openresty is; I suppose it's a custom image or something? (Docker noob here)... Anyway, I've tried the following from that link:
Created the docker-entrypoint.sh and nginxReloader.sh files:
docker-entrypoint.sh:
#!/bin/bash
###########
sh -c "nginxReloader.sh &"
exec "$#"
nginxReloader.sh:
#!/bin/bash
###########
while true
do
inotifywait --exclude .swp -e create -e modify -e delete -e move /etc/nginx/conf.d
nginx -t
if [ $? -eq 0 ]
then
echo "Detected Nginx Configuration Change"
echo "Executing: nginx -s reload"
nginx -s reload
fi
done
And added this to Dockerfile:
# [...]
COPY ./nginxReloader.sh /usr/local/bin/nginxReloader.sh
COPY ./docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/nginxReloader.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
RUN apt-get update && apt-get install -y --no-install-recommends apt-utils
RUN apt-get install inotify-tools -y
ENTRYPOINT [ "/usr/local/bin/docker-entrypoint.sh" ]
# CMD ["/usr/local/openresty/bin/openresty", "-g", "daemon off;"] (don't know what this would do, but I wouldn't know what to replace `openresty` with in my case, so I omitted this line from the tutorial at the link I provided)
But when trying docker-compose up --build it either errored with "No such file or directory" for the exec "$@" line in the docker-entrypoint.sh file, OR I got nginx exited with code 0 when doing docker-compose up (of course, I tried different things between those errors, but I can't remember exactly what).
Also, I've tried to point the ENTRYPOINT in the Dockerfile directly to nginxReloader.sh (ENTRYPOINT [ "/usr/local/bin/nginxReloader.sh" ]) but then when trying docker-compose up I only get 2 lines of output:
Setting up watches.
Watches established.
and nginx never starts (I suppose it's because of that while true loop).
Also, if I completely remove the ENTRYPOINT line in the Dockerfile, when running docker-compose up I still get the following output:
my-site_nginx | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
my-site_nginx | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
my-site_nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
my-site_nginx | 10-listen-on-ipv6-by-default.sh: error: can not modify /etc/nginx/conf.d/default.conf (read-only file system?)
my-site_nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
my-site_nginx | /docker-entrypoint.sh: Configuration complete; ready for start up
like Docker is somehow aware of that file being in the folder, at the same level as the Dockerfile... No errors, but changing a config still doesn't trigger nginx -s reload
Your issue is that you are not running the original entrypoint when you override it with your new entrypoint, so nginx will not start. Change
docker-entrypoint.sh:
#!/bin/bash
###########
sh -c "nginxReloader.sh &"
exec "$#"
to
docker-entrypoint.sh:
#!/bin/bash
###########
sh -c "nginxReloader.sh &"
exec /docker-entrypoint.sh "$@"
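(Note that /docker-entrypoint.sh here is the entrypoint script that ships inside the official nginx image, the one producing the "Looking for shell scripts in /docker-entrypoint.d/" lines in your logs above, not a file from your own build context.)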
I had the same problem and the accepted answer by @TarunLalwani is missing changes to the Dockerfile that are also required.
I wasn't able to edit their answer, so here is the updated solution that is verified to work, including some additional clarifications.
There are 2 issues:
First, you are not running the original entrypoint when you override it with your new entrypoint. Change
docker-entrypoint.sh:
#!/bin/bash
###########
sh -c "nginxReloader.sh &"
exec "$#"
to
docker-entrypoint.sh:
#!/bin/bash
###########
sh -c "nginxReloader.sh &"
# See changes to Dockerfile next for more details, but
# you aren't including `CMD` in your Dockerfile. So
# forwarding the arguments ("$@") won't include "nginx",
# which is what the default nginx /docker-entrypoint.sh
# will look for before setting up the service.
exec /docker-entrypoint.sh "$@"
Second, as stated above, you also need to include CMD in your Dockerfile, or you'll still get nginx exited with code 0. Add:
Dockerfile
...
# same as before
ENTRYPOINT [ "/usr/local/bin/docker-entrypoint.sh" ]
# But include this, which is the same command as in the
# default nginx Dockerfile:
CMD [ "nginx", "-g", "daemon off;" ]
The line in the tutorial that you linked to but commented out in your Dockerfile version was:
CMD ["/usr/local/openresty/bin/openresty", "-g", "daemon off;"]
You're not using the openresty version of nginx, so you're just using "nginx". And since nginx is on the container's PATH, you don't need the full path to the binary like the tutorial uses for openresty.
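To sanity-check the reload loop once everything is in place, a minimal sketch (assuming a Linux host, where file changes made on the host propagate through the bind mount to inotify inside the container):
docker-compose up -d --build
# edit or touch any file under ./nginx/conf.d on the host, then:
docker-compose logs -f my-site_nginx
# expected in the logs: "Detected Nginx Configuration Change"
#                       "Executing: nginx -s reload"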
Related
I am trying to launch multiple startup scripts to automate my CI/CD tasks, but I am only getting the output of entrypoint.sh. How can I force the other scripts to execute?
entrypoint.sh
#!/bin/sh
IFS=$',\n' ## set IFS to break on comma or newline
for host in $HOSTS; do
## mkdir -p "letsencrypt/live/${host}/fullchain.pem"
echo "mkdir -p letsencrypt/live/${host}/fullchain.pem"
done
init-letsencrypt.sh
#!/bin/sh
echo "cool"
xxxxx:~/xx$ docker-compose logs nginx
Attaching to platform_nginx_1
nginx_1 | mkdir -p letsencrypt/live/domain.io/fullchain.pem
nginx_1 | mkdir -p letsencrypt/live/www.domain.io/fullchain.pem
nginx_1 | mkdir -p letsencrypt/live/api.domain.io/fullchain.pem
nginx_1 | mkdir -p letsencrypt/live/app.domain.io/fullchain.pem
FROM nginx:1.19.0-alpine
# Install certbot for letsencrypt certificates
RUN apk add --no-cache certbot
COPY . /etc/nginx/
# Directory needed for Let's Encrypt certificate renewal
RUN mkdir /var/lib/certbot
# Add scripts and auto-renewal scripts
COPY ./bin/entrypoint.sh /entrypoint.sh
COPY ./bin/init-letsencrypt.sh /init-letsencrypt.sh
# Make them executable
RUN chmod +x /entrypoint.sh
RUN chmod +x /init-letsencrypt.sh
# Install certificates and launch
ENTRYPOINT /entrypoint.sh
To run something at build time, you should use RUN.
CMD and ENTRYPOINT are used to launch the main process of your container. A container is "just" a process that is encapsulated in namespaces, basically. A container runs until this process stops or dies. When you specify your entrypoint.sh as the entrypoint, you are saying that the main process of your container is this script. To say it differently: the only goal of this container is to execute this script, then die.
You should use RUN to launch both of your scripts, then CMD or ENTRYPOINT to launch your nginx (most probably ENTRYPOINT, you will get why if you read the docs ;))
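For illustration, a minimal Dockerfile sketch of that split, assuming both scripts only do one-time setup that can safely happen at build time (real certificate issuance usually has to happen at runtime, so treat this purely as the RUN vs ENTRYPOINT/CMD shape):
FROM nginx:1.19.0-alpine
RUN apk add --no-cache certbot
COPY ./bin/entrypoint.sh /entrypoint.sh
COPY ./bin/init-letsencrypt.sh /init-letsencrypt.sh
RUN chmod +x /entrypoint.sh /init-letsencrypt.sh
# build-time work goes into RUN steps (anything they need, e.g. HOSTS, must be available at build time)
RUN /entrypoint.sh && /init-letsencrypt.sh
# the long-running main process of the container stays in CMD (same as the base image default)
CMD ["nginx", "-g", "daemon off;"]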
I have a base container that has an ENTRYPOINT that runs as root:
Base container Dockerfile:
FROM docker.io/opensuse/leap:latest
# Add scripts to be executed during startup
COPY startup /startup
ADD https://example.com/install-ca-cert.sh /startup/startup.d/install-ca-cert-base.sh
RUN chmod +x /startup/* /startup/startup.d/*
# Add Tini
ENV TINI_VERSION v0.18.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--", "/startup/startup.sh"]
And a derived container that uses gosu to perform a root step down after the startup scripts have been run as root:
Derived container Dockerfile:
ADD ./gosu-entrypoint.sh /usr/local/bin/gosu-entrypoint.sh
RUN chmod +x /usr/local/bin/gosu-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/gosu-entrypoint.sh"]
CMD ["whoami"]
gosu-entrypoint.sh:
#!/bin/bash
set -e
# Call original entrypoint (as root)
/tini -s /startup/startup.sh
# If GOSU_USER environment variable is set, execute the specified command as that user
if [ -n "$GOSU_USER" ]; then
useradd --shell /bin/bash --system --user-group --create-home $GOSU_USER
exec /usr/local/bin/gosu $GOSU_USER "$@"
else
# else GOSU_USER environment variable is not set, execute the specified command as the default (root) user
exec "$#"
fi
This all works fine, by setting the GOSU_USER env var and running the container, the startup scripts are executed as root, and the CMD is executed as GOSU_USER:
export GOSU_USER=jim
docker run my-derived-container
# outputs "jim"
...
unset GOSU_USER
docker run my-derived-container
# outputs root
However, I am trying to determine if the above approach (maybe modified) is able to work with the Kubernetes securityContext runAsUser and runAsGroup directives?
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
I think these directives are turned into the containerd equivalent of docker run --user=xxx:yyy, so as such, they wouldn't work, since this:
docker run --user $(id -u):$(id -g) my-derived-container
results in a permissions error due to the startup scripts not being run as root anymore.
I have seen examples of entrypoint.sh scripts that allow the container to be started with the --user flag, but I'm not sure if that's something I can use or not, i.e. even if the --user flag is provided, I still need the startup scripts to run as root:
https://github.com/docker-library/redis/blob/master/5.0/docker-entrypoint.sh#L11
# allow the container to be started with `--user`
if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then
find . \! -user redis -exec chown redis '{}' +
exec gosu redis "$0" "$@"
fi
exec "$#"
Update: Looking again at the above redis example, I'm not sure it really does allow the container to be started with --user as it states; looking at the Dockerfile, redis-server is the CMD passed to the script ($1):
https://github.com/docker-library/redis/blob/master/5.0/Dockerfile#L118
CMD ["redis-server"]
and the redis user is just hardcoded in the above docker-entrypoint.sh:
I want to build my own custom docker image from nginx image.
I override the ENTRYPOINT of nginx with my own ENTRYPOINT file.
Which brings me to two questions:
I think I lose some commands from nginx by doing so. Am I right? (like exposing the port...)
If I want to restart nginx I run these commands: nginx -t && systemctl reload nginx, but the output is:
nginx: configuration file /etc/nginx/nginx.conf test is successful
/entrypoint.sh: line 5: systemctl: command not found
How to fix that?
FROM nginx:latest
WORKDIR /
RUN echo "deb http://ftp.debian.org/debian stretch-backports main" >> /etc/apt/sources.list
RUN apt-get -y update && \
apt-get -y install apt-utils && \
apt-get -y upgrade && \
apt-get -y clean
# I ALSO WANT TO INSTALL CERTBOT FOR LATER USE (in my entrypoint file)
RUN apt-get -y install python-certbot-nginx -t stretch-backports
# COPY ./something ./tothisimage
# COPY ./something ./tothisimage
# COPY ./something ./tothisimage
# COPY ./something ./tothisimage
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["bash", "/entrypoint.sh"]
entrypoint.sh
echo "in entrypoint"
# I want to run some commands here...
# After I want to run nginx normally....
nginx -t && systemctl reload nginx
echo "after reload"
This will work using the service command:
echo "in entrypoint"
# I want to run some commands here...
# After I want to run nginx normally....
nginx -t && service nginx reload
echo "after reload"
output:
in entrypoint
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Restarting nginx: nginx.
after reload
Commands like service and systemctl mostly just don't work in Docker, and you should totally ignore them.
At the point where your entrypoint script is running, it is literally the only thing that is running. That means you don't need to restart nginx, because it hasn't started the first time yet. The standard pattern here is to use the entrypoint script to do some first-time setup; it will be passed the actual command to run as arguments, so you need to tell it to run them.
#!/bin/sh
echo "in entrypoint"
# ... do first-time setup ...
# ...then run the command, nginx or otherwise
exec "$#"
(Try running docker run --rm -it myimage /bin/sh. You will get an interactive shell in a new container, but after this first-time setup has happened.)
The one thing you do lose in your Dockerfile is the default CMD from the base image (setting an ENTRYPOINT resets that). You need to add back that CMD:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
You should keep the other settings from the base image, like ENV definitions and EXPOSEd ports.
The "systemctl" command is specific to some SystemD based operating system. But you do not have such a SystemD daemon running on PID 1 - so even if you install those packages it wont work.
You can only check in the nginx.service file which command the "reload" would execute for real. Or have something like the docker-systemctl-replacement script do it for you.
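Inside the container, the direct equivalent is simply to signal the nginx master process yourself (the same commands used elsewhere on this page):
nginx -t && nginx -s reload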
I'm trying to build docker-compose, but I'm getting this error:
ERROR: for indicaaquicombrold_mysqld_1 Cannot start service mysqld:
oci runtime error: container_linux.go:247: starting container process
caused "exec: \"/docker-entrypoint.sh\": permission denied"
ERROR: for mysqld Cannot start service mysqld: oci runtime error:
container_linux.go:247: starting container process caused "exec:
\"/docker-entrypoint.sh\": permission denied"
ERROR: Encountered errors while bringing up the project.
docker-compose.yml
version: '3'
services:
  php:
    build:
      context: ./docker/php
    image: indicaaqui.com.br:tag
    volumes:
      - ./src:/var/www/html/
      - ./config/apache-config.conf:/etc/apache2/sites-enabled/000-default.conf
    ports:
      - "80:80"
      - "443:443"
  mysqld:
    build:
      context: ./docker/mysql
    environment:
      - MYSQL_DATABASE=db_indicaaqui
      - MYSQL_USER=indicaqui
      - MYSQL_PASSWORD=secret
      - MYSQL_ROOT_PASSWORD=docker
    volumes:
      - ./config/docker-entrypoint.sh:/docker-entrypoint.sh
      - ./database/db_indicaaqui.sql:/docker-entrypoint-initdb.d/db_indicaaqui.sql
Dockerfile (php)
FROM php:5.6-apache
MAINTAINER Limup <limup@outlook.com>
CMD [ "php" ]
RUN docker-php-ext-install pdo_mysql
# Enable apache mods.
# RUN a2enmod php5.6
RUN a2enmod rewrite
# Expose apache.
EXPOSE 80
EXPOSE 443
# Use the default production configuration
# RUN mv "$PHP_INI_DIR/php.ini-production" "$PHP_INI_DIR/php.ini"
RUN mv "$PHP_INI_DIR/php.ini-development" "$PHP_INI_DIR/php.ini"
# Override with custom opcache settings
# COPY ./../../config/php.ini $PHP_INI_DIR/conf.d/
# Manually set up the apache environment variables
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
# Update the PHP.ini file, enable <? ?> tags and quieten logging.
RUN sed -i "s/short_open_tag = Off/short_open_tag = On/" "$PHP_INI_DIR/php.ini"
RUN sed -i "s/error_reporting = .*$/error_reporting = E_ERROR | E_WARNING | E_PARSE/" "$PHP_INI_DIR/php.ini"
RUN a2dissite 000-default.conf
RUN chmod -R 777 /etc/apache2/sites-enabled/
WORKDIR /var/www/html/
# By default start up apache in the foreground, override with /bin/bash for interactive.
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
Dockerfile (Mysql)
FROM mariadb:latest
RUN chmod -R 777 /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
EXPOSE 3306
CMD ["mysqld"]
Please, help me solve this problem!
Any ideas?
That is most likely a Linux file permission issue on config/docker-entrypoint.sh. If your host is Linux/Mac, you can run:
chmod 755 config/docker-entrypoint.sh
For more on Linux permissions, here's a helpful article: https://www.linux.com/learn/understanding-linux-file-permissions
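Since the script is bind-mounted into the container, fixing the mode on the host is enough; a quick way to verify before bringing the stack back up (a sketch, exact ls output may differ):
chmod 755 config/docker-entrypoint.sh
ls -l config/docker-entrypoint.sh    # should now show -rwxr-xr-x
docker-compose up -d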
First, you need to copy the entrypoint.sh file into a different directory than your source code (e.g. /home/entrypoint.sh), then grant permission to execute the entrypoint script:
RUN ["chmod", "+x", "/home/entrypoint.sh"]
Solution
ENV USER root
ENV WORK_DIR_PATH /home
RUN mkdir -p $WORK_DIR_PATH && chown -R $USER:$USER $WORK_DIR_PATH
WORKDIR $WORK_DIR_PATH
Info
The USER instruction sets the user name (or UID) and optionally the user group (or GID) to use when running the image and for any RUN, CMD and ENTRYPOINT instructions that follow it in the Dockerfile.
The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile. If the WORKDIR doesn’t exist, it will be created even if it’s not used in any subsequent Dockerfile instruction.
Links
chown command
docker builder reference
A pretty common solution if nothing works is to re-install Docker. That's what ended up working for me after trying everything under the sun in terms of permissions for about 5 hours.
Right now, I am using a docker-compose file that contains, amongst other stuff, a few lines like this. This executes without any sort of problem. It deploys perfectly and I'm able to access the web server inside through the browser.
container:
  command: bash -c "cd /code; chmod +x ./deploy/start_dev.sh; ./deploy/start_dev.sh;"
  image: python:3.6
As I needed to be able to connect to the container through SSH I created a Dockerfile that installs it and modifies the config file so it allows unsafe root connections:
FROM python:3.6
RUN apt-get update && apt-get install openssh-server -y
RUN sed -i "s/PermitRootLogin without-password/PermitRootLogin yes/g" /etc/ssh/sshd_config
RUN sed -i "s/PermitEmptyPasswords no/PermitEmptyPasswords yes/g" /etc/ssh/sshd_config
RUN service ssh restart
RUN echo "root:sshpassword" | chpasswd
ENTRYPOINT ["/bin/sh", "-c"]
CMD ["/bin/bash"]
After that I changed the docker-compose file to:
container:
  command: bash -c "cd /code; chmod +x ./deploy/start_dev.sh; ./deploy/start_dev.sh;"
  build:
    context: .
From this moment on, whenever I run docker-compose up I get the following output:
container exited with code 0
Is there something I am missing?
In your docker-compose.yaml file, add the following parameter (under the 'container' section):
tty: true
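In context, that is just one extra key on the service (other keys unchanged):
container:
  tty: true
  command: bash -c "cd /code; chmod +x ./deploy/start_dev.sh; ./deploy/start_dev.sh;"
  build:
    context: .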
Solved it by switching the last two lines of the Dockerfile
ENTRYPOINT ["/bin/sh", "-c"]
CMD ["/bin/bash"]
to
CMD ["/bin/bash", "-c", "/bin/bash"]