Docker - Supervisord container with Nginx (sudo user)

I have created a base image with supervisord installed.
Summary of steps:
FROM ubuntu:20.04
Then I installed some base utilities (time zone/nano/sudo/zip etc)
FROM current_timezone/base-utils:1.04
Then I created a base supervisord image including a user with sudo privileges and password.
RUN apt-get update \
&& groupadd ${DOCKER_CONTAINER_WEBGROUP} -f \
&& useradd -m -s $(which bash) -G sudo ${DOCKER_CONTAINER_USERNAME} \
&& echo "${DOCKER_CONTAINER_USERNAME}:${DOCKER_CONTAINER_PASSWORD}" | chpasswd \
&& usermod -aG www-data ${DOCKER_CONTAINER_USERNAME}
So in any Docker image deriving from this I can run supervisord:
USER ${DOCKER_CONTAINER_USERNAME}
CMD ["/usr/bin/supervisord"]
So I have Dockerfile entries for images deriving from this image (a sketch of such a derived stage follows this list):
Apache
Nginx
Varnish
etc
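For illustration, a minimal derived stage might look roughly like the sketch below. This is only a sketch: the base image tag (current_timezone/base-supervisord:1.00) is hypothetical, and it assumes DOCKER_CONTAINER_USERNAME is exported as an ENV (not just a build ARG) by the base image.
# Hypothetical tag for the supervisord base image described above
FROM current_timezone/base-supervisord:1.00
# Install the service this stage is responsible for (Apache here)
RUN apt-get update \
&& apt-get install -y --no-install-recommends apache2 \
&& rm -rf /var/lib/apt/lists/*
# Stage-specific supervisord program definitions
COPY supervisor/conf.d/ /etc/supervisor/conf.d/
# Run supervisord as the unprivileged user created in the base image
USER ${DOCKER_CONTAINER_USERNAME}
CMD ["/usr/bin/supervisord"]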
Most of the applications can launch with supervisord like this:
[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
autorestart=false
startretries=0
But Nginx doesn't launch, the error:
the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1
So I created the following, expecting to get an input prompt once the container starts (the objective was to receive an input prompt at container start so the password could be piped to sudo -S to start Nginx):
[program:nginx]
command=sudo -K && read -s -p "Nginx requires a super-privileges (sudo user) to start - Please enter password for your sudo user: " TMP_PW && echo $TMP_PW | sudo -S service nginx start && unset TMP_PW
user=userdefinedinstagesupwards
Running the command above from the command line once I am already inside the container (docker exec -ti container_nginx bash) works, and I can enter the password interactively.
The Issues
Nginx does not start automatically, and I have to enter the container to start Nginx manually.
NOTE: I have seen the docker nginx image
docker run -d -v $PWD/nginx.conf:/etc/nginx/nginx.conf nginx, but that image only contains Nginx. I have some tools I would like to reuse (as explained above, I created an image that has them installed), which means I would have to recreate those steps just for Nginx.
Additional information
As requested by users below: the reason I am using supervisord like this is that I run several helper scripts (debug info/dynamic paths/secrets) alongside the main application (e.g. Apache/Nginx/Varnish).
A simple example: an Apache web server with two supervisord config files (kept brief):
When supervisord initializes (CMD ["/usr/bin/supervisord"]), the main application starts along with the helper scripts (in this example they echo environment variables built in parent images). I can then access all output in /var/log/supervisor/app-stdout* (or stderr) as required.
For instance, ${INSTALLED_BASE_APPS_TEXT} tells me which apps from my base-utils image are installed. If I ever need another tool, say htop, I can update the parent image and rebuild this child stage later. Some tools I always want available regardless of which container is running; nano, zip, etc. are things I use permanently.
supervisor/conf.d/config-webserver.conf
[supervisord]
nodaemon=true
[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
autorestart=false
startretries=0
supervisor/conf.d/config-information.conf
[program:echo]
command=/bin/bash -c "echo Loaded Supervisord program 'echo' - Stage 5 operation \(Custom Nginx supervisord config\)"
autorestart=false
startretries=1
[program:echo_base_utils]
command=/bin/bash -c "echo ${INSTALLED_BASE_APPS_TEXT}"
autorestart=false
startretries=0
[program:echo_test_item]
command=/bin/bash -c "echo ${ENV_TEST_ITEM}"
autorestart=false
startretries=0
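For context, the two echo programs above only print variables that the parent images export; in the base-utils stage that could be as simple as the following ENV lines (the values here are made up for illustration):
ENV INSTALLED_BASE_APPS_TEXT="base-utils installed: tzdata nano sudo zip unzip"
ENV ENV_TEST_ITEM="stage-5 test value"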
QUESTION
Is there any way that supervisord commands can be made to prompt for input as soon as the container starts? I would like to keep using the images described above.

Related

How to solve file permission issues when developing with Apache HTTP server in Docker?

My Dockerfile extends from php:8.1-apache. The following happens while developing:
The application creates log files (as www-data, 33:33)
I create files (as the image's default user root, 0:0) within the container
These files are mounted on my host where I'm acting as user (1000:1000). Of course I'm running into file permission issues now. I'd like to update/delete files created in the container on my host and vice versa.
My current solution is to set the image's user to www-data. In that way, all created files belong to it. Then, I change its user and group id from 33 to 1000. That solves my file permission issues.
However, this leads to another problem:
I'm prepending sudo -E to the entrypoint and command. I'm doing that because they're normally running as root and my custom entrypoint requires root permissions. But in that way the stop signal stops working and the container has to be killed when I want it to stop:
~$ time docker-compose down
Stopping test_app ... done
Removing test_app ... done
Removing network test_default
real 0m10,645s
user 0m0,167s
sys 0m0,004s
Here's my Dockerfile:
FROM php:8.1-apache AS base
FROM base AS dev
COPY entrypoint.dev.sh /usr/local/bin/custom-entrypoint.sh
ARG user_id=1000
ARG group_id=1000
RUN set -xe \
# Create a home directory for www-data
&& mkdir --parents /home/www-data \
&& chown --recursive www-data:www-data /home/www-data \
# Make www-data's user and group id match my host user's ones (1000 and 1000)
&& usermod --home /home/www-data --uid $user_id www-data \
&& groupmod --gid $group_id www-data \
# Add sudo and let www-data execute it without asking for a password
&& apt-get update \
&& apt-get install --yes --no-install-recommends sudo \
&& rm --recursive --force /var/lib/apt/lists/* \
&& echo "www-data ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/www-data
USER www-data
# Run entrypoint and command as sudo, as my entrypoint does some config substitution and both normally run as root
ENTRYPOINT [ "sudo", "-E", "custom-entrypoint.sh" ]
CMD [ "sudo", "-E", "apache2-foreground" ]
Here's my custom-entrypoint.sh
#!/bin/sh
set -e
sed --in-place 's#^RemoteIPTrustedProxy.*#RemoteIPTrustedProxy '"$REMOTEIP_TRUSTED_PROXY"'#' $APACHE_CONFDIR/conf-available/remoteip.conf
exec docker-php-entrypoint "$@"
What do I need to do to make the container catch the stop signal (it is SIGWINCH for the Apache server) again? Or is there a better way to handle the file permission issues, so I don't need to run the entrypoint and command with sudo -E?
What do I need to do to make the container catch the stop signal (it is SIGWINCH for the Apache server) again?
First, get rid of sudo; if you need to be root in your container, run it as root with USER root in your Dockerfile. There's little value added by sudo in the container, since it should be an environment to run one app and not a multi-user general-purpose Linux host.
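In other words, drop the sudo wrappers and keep the entrypoint and command running as root. A sketch of the relevant Dockerfile lines, reusing the names from the question:
FROM php:8.1-apache AS dev
COPY entrypoint.dev.sh /usr/local/bin/custom-entrypoint.sh
# Run as root so the entrypoint can do its config substitution without sudo
USER root
ENTRYPOINT ["custom-entrypoint.sh"]
CMD ["apache2-foreground"]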
Or is there a better way to handle the file permission issues, so I don't need to run the entrypoint and command with sudo -E?
The pattern I go with is to have developers launch the container as root, and have the entrypoint detect the uid/gid of the mounted volume, and adjust the uid/gid of the user in the container to match that id before running gosu to drop permissions and run as that user. I've included a lot of this logic in my base image example (note the fix-perms script that tweaks the uid/gid). Another example of that pattern is in my jenkins-docker image.
You'll still need to either configure root's login shell to automatically run gosu inside the container, or remember to always pass -u www-data when you exec into your image, but now that uid/gid will match your host.
This is primarily for development. In production, you probably don't want host volumes, use named volumes instead, or at least hardcode the uid/gid of the user in the image to match the desired id on the production hosts. That means the Dockerfile would still have USER www-data but the docker-compose.yml for developers would have user: root that doesn't exist in the compose file in production. You can find a bit more on this in my DockerCon 2019 talk (video here).
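A stripped-down sketch of that entrypoint pattern (this is not the actual fix-perms script; it assumes the container starts as root, gosu is installed in the image, and /var/www/html is the bind-mounted directory):
#!/bin/sh
set -e
if [ "$(id -u)" = "0" ]; then
  # Align www-data with the owner of the mounted volume
  target_uid="$(stat -c %u /var/www/html)"
  target_gid="$(stat -c %g /var/www/html)"
  if [ "$target_uid" != "0" ]; then
    usermod --non-unique --uid "$target_uid" www-data
    groupmod --non-unique --gid "$target_gid" www-data
  fi
  # Drop privileges and hand off to the stock php entrypoint
  exec gosu www-data docker-php-entrypoint "$@"
fi
exec docker-php-entrypoint "$@"
With ENTRYPOINT pointing at a script like this and CMD ["apache2-foreground"], the container still starts Apache, just under a www-data uid that matches the volume owner.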
You can use user namespaces to map a different user/group inside your container to your user on the host.
For example, the group www-data/33 in the container could be the group docker-www-data/100033 on the host; you just have to be in that group to access the log files.
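With user namespaces the remapping is configured on the daemon rather than in the image. A sketch, where the 100000 offset is simply the common default subordinate-ID range:
/etc/docker/daemon.json:
{
  "userns-remap": "default"
}
/etc/subuid and /etc/subgid (the dockremap entries are created automatically):
dockremap:100000:65536
With that in place, uid/gid 33 (www-data) inside the container shows up as 100033 on the host, as described above.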

Why is Docker CMD running as chronos in GKE?

I have a pod and NodePort service running on GKE.
In the Dockerfile for the container in my pod, I'm using gosu to run a command as a specific user:
startup.sh
exec /usr/local/bin/gosu mytestuser "$@"
Dockerfile
FROM ${DOCKER_HUB_PUBLIC}/opensuse/leap:latest
# Download and verify gosu
RUN gpg --batch --keyserver-options http-proxy=${env.HTTP_PROXY} --keyserver hkps://keys.openpgp.org \
--recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 && \
curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.12/gosu-amd64" && \
curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.12/gosu-amd64.asc" && \
gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu && \
chmod +x /usr/local/bin/gosu
# Add tini
ENV TINI_VERSION v0.18.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--", "/startup/startup.sh"]
# Add mytestuser
RUN useradd mytestuser
# Run startup.sh which will use gosu to execute following `CMD` as `mytestuser`
RUN /startup/startup.sh
CMD ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/helloworld.jar"]
I've just noticed that when I log into the container on GKE and look at the processes running, the java process that I would expect to be running as mytestuser is actually running as chronos:
me@gke-cluster-1-default-pool-1234 ~ $ ps aux | grep java
root 9551 0.0 0.0 4296 780 ? Ss 09:43 0:00 /tini -- /startup/startup.sh java -Djava.security.egd=file:/dev/./urandom -jar /helloworld.jar
chronos 9566 0.6 3.5 3308988 144636 ? Sl 09:43 0:12 java -Djava.security.egd=file:/dev/./urandom -jar /helloworld.jar
Can anyone explain what's happening, i.e. who is the chronos user, and why my process is not running as mytestuser?
When you RUN useradd, it assigns a user ID in the image's /etc/passwd file. Your script launches the process using that numeric user ID. When you subsequently run ps from the host, though, it looks up that user ID in the host's /etc/passwd file and gets something different.
This difference doesn't usually matter. Only the numeric user ID matters for things like filesystem permissions, if you're bind-mounting a directory from the host. For security purposes it's important that the numeric user ID not be 0, but that's pretty universally named root.
When you run a useradd inside the container (or as part of the image build), it adds an entry to the /etc/passwd inside the container. The uid/gid will be in a namespace shared with the host, unless you enable user namespaces. However, the mapping of those ids to names is specific to the filesystem namespace where the process is running. Therefore, in this scenario, the uid of mytestuser inside the container happens to be the same uid as chronos on the host.
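You can verify the mapping yourself; a sketch with a hypothetical container name, where the two lookups consult two different /etc/passwd files:
# inside the container: prints the numeric uid assigned by useradd, e.g. 1000
docker exec mycontainer id -u mytestuser
# on the GKE node: the same numeric uid resolves against the host's /etc/passwd,
# where it happens to belong to chronos
getent passwd 1000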

Inject SSH key into a Docker container

I am trying to find a "global" solution for injecting an SSH key into a container. I know that there are several solutions, including Docker BuildKit and so on... but I don't want to build an image and inject the SSH key. I want to inject the SSH key by using an existing image with docker compose.
I use the following docker compose file:
version: '3.1'
services:
  server1:
    image: XXXXXXX
    container_name: server1
    command: bash -c "/root/init.sh && python3 /root/my_python.py"
    environment:
      - MANAGED_HOST=mserver
    volumes:
      - ./init.sh:/root/init.sh
    secrets:
      - id_rsa
secrets:
  id_rsa:
    file: /home/user/.ssh/id_rsa
The init.sh is as follows:
#!/bin/bash
eval "$(ssh-agent -s)" > /dev/null
if [ ! -d "/root/.ssh/" ]; then
  mkdir /root/.ssh
  ssh-keyscan $MANAGED_HOST > /root/.ssh/known_hosts
fi
ssh-add -k /run/secrets/id_rsa
If I run docker compose with the command parameter
bash -c "/root/init.sh && python3 /root/my_python.py", then SSH authentication to the appropriate remote host ($MANAGED_HOST) does not work.
An agent process is running:
root 8 1 0 12:50 ? 00:00:00 ssh-agent -s
known_hosts is OK:
root@c67655d87ced:~# cat /root/.ssh/known_hosts
BLABLABLA ssh-rsa AAAAB3BLABLABLA....
and the agent is running, but the private key is not added:
root@c67655d87ced:~# ssh-add -l
Could not open a connection to your authentication agent.
Now, if I log in the container (docker exec -it server1 /bin/bash) and run the commands from init.sh one by one from the command line, then the SSH authentication to the appropriate remote host ($MANAGED_HOST) is working?!?
Any idea how I can get it working using docker compose?
It should be enough to cause the file $HOME/.ssh/id_rsa to exist with appropriate permissions; you don't need an ssh agent running.
#!/bin/sh
if ! [ -d "$HOME/.ssh" ]; then
  mkdir "$HOME/.ssh"
fi
chmod 0700 "$HOME/.ssh"
if [ -n "$MANAGED_HOST" ]; then
  ssh-keyscan "$MANAGED_HOST" >> "$HOME/.ssh/known_hosts"
fi
if [ -f /run/secrets/id_rsa ]; then
  cp /run/secrets/id_rsa "$HOME/.ssh/id_rsa"
  chmod 0400 "$HOME/.ssh/id_rsa"
fi
# exec "$@"
A typical pattern is to use the Dockerfile ENTRYPOINT to do first-time setup tasks like this. The entrypoint gets passed the CMD as arguments, and the exec "$@" line at the end of the file (commented out above; uncomment it when the script is used as the image's entrypoint) is what then runs that command. You'd set this up in your image's Dockerfile like:
FROM XXXXXX
...
# Script must be executable on the host, and must start with a
# #!/bin/sh "shebang" line
COPY init.sh /root
# MUST use JSON-array form
ENTRYPOINT ["/root/init.sh"]
# Can use any Dockerfile syntax
CMD ["python3", "/root/my_python.py"]
In your specific example, you're launching init.sh as a subprocess. The ssh-agent setup sets some environment variables, like $SSH_AUTH_SOCK, but when these run as a subprocess they don't get propagated back out to the host process. You can use the standard POSIX shell . builtin (the bash source builtin is equivalent, but non-standard) to cause those environment variables to be set in the context of the parent shell:
command: sh -c ". /root/init.sh && exec python3 /root/my_python.py"
The exec replaces the shell wrapper with the Python script, which you generally want. This will also wind up being the parent process of ssh-agent, which could potentially surprise your process if it happens to exit.
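With the entrypoint baked into the image as above, the compose file from the question shrinks to roughly this (a sketch; the image has to be rebuilt to contain init.sh and the ENTRYPOINT/CMD, so the init.sh bind-mount and the command override are no longer needed):
version: '3.1'
services:
  server1:
    image: XXXXXXX
    container_name: server1
    environment:
      - MANAGED_HOST=mserver
    secrets:
      - id_rsa
secrets:
  id_rsa:
    file: /home/user/.ssh/id_rsa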

How can I run script automatically after Docker container startup

I'm using Search Guard plugin to secure an elasticsearch cluster composed of multiple nodes.
Here is my Dockerfile:
#!/bin/sh
FROM docker.elastic.co/elasticsearch/elasticsearch:5.6.3
USER root
# Install search guard
RUN bin/elasticsearch-plugin install --batch com.floragunn:search-guard-5:5.6.3-16 \
&& chmod +x \
plugins/search-guard-5/tools/hash.sh \
plugins/search-guard-5/tools/sgadmin.sh \
bin/init_sg.sh \
&& chown -R elasticsearch:elasticsearch /usr/share/elasticsearch
USER elasticsearch
To initialize Search Guard (create internal users and assign roles), I need to run the script init_sg.sh after the container starts.
Here is the problem: Unless elasticsearch is running, the script will not initialize any security index.
The script's content is :
sleep 10
plugins/search-guard-5/tools/sgadmin.sh -cd config/ -ts config/truststore.jks -ks config/kirk-keystore.jks -nhnv -icl
Right now I just run the script manually after the container starts, but since I'm running it on Kubernetes, pods may get killed or fail and get recreated automatically for some reason. In that case, the plugin has to be initialized automatically after the container starts!
So how to accomplish this? Any help or hint would be really appreciated.
The image itself has an entrypoint, ENTRYPOINT ["/run/entrypoint.sh"], specified in its Dockerfile. You can replace it with your own script: for example, create a new script, mount it, and have it first call /run/entrypoint.sh, then wait for elasticsearch to start before running your init_sg.sh.
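A wrapper along those lines might look like the sketch below. It assumes the original entrypoint really is /run/entrypoint.sh as stated above, that the working directory is /usr/share/elasticsearch so bin/init_sg.sh resolves, and it uses bash's /dev/tcp so it doesn't depend on curl or netstat being present in the image:
#!/bin/bash
set -e
# Start elasticsearch through its original entrypoint, in the background
/run/entrypoint.sh "$@" &
es_pid=$!
# Wait until the transport port answers before initializing Search Guard
until (echo > /dev/tcp/localhost/9300) 2>/dev/null; do
  sleep 5
done
bin/init_sg.sh
# Stay attached to the elasticsearch process so the container lives and dies with it
wait "$es_pid"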
Not sure this will solve your problem, but it's worth checking my repo's Dockerfile.
I have created a simple run.sh file, copied it into the Docker image, and in the Dockerfile wrote CMD ["run.sh"]. In the same way, define whatever you want in run.sh and write CMD ["run.sh"]. Here is another example:
Dockerfile
FROM java:8
RUN apt-get update && apt-get install stress-ng -y
ADD target/restapp.jar /restapp.jar
COPY dockerrun.sh /usr/local/bin/dockerrun.sh
RUN chmod +x /usr/local/bin/dockerrun.sh
CMD ["dockerrun.sh"]
dockerrun.sh
#!/bin/sh
java -Dserver.port=8095 -jar /restapp.jar &
hostname="hostname: `hostname`"
nohup stress-ng --vm 4 &
while true; do
sleep 1000
done
This is addressed in the documentation here: https://docs.docker.com/config/containers/multi-service_container/
If one of your processes depends on the main process, then start your helper process FIRST with a script like wait-for-it, then start the main process SECOND and remove the fg %1 line.
#!/bin/bash
# turn on bash's job control
set -m
# Start the primary process and put it in the background
./my_main_process &
# Start the helper process
./my_helper_process
# the my_helper_process might need to know how to wait on the
# primary process to start before it does its work and returns
# now we bring the primary process back into the foreground
# and leave it there
fg %1
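For the alternative described just before the script (helper process first, primary second, no fg %1), a minimal variant would be:
#!/bin/bash
# Start the helper first; it is expected to wait for the primary itself,
# e.g. with wait-for-it, before doing its work
./my_helper_process &
# Start the primary process in the foreground and leave it there
exec ./my_main_process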
I was trying to solve the exact same problem. Here's the approach that worked for me.
Create a separate shell script that checks the ES status and starts the Search Guard initialization only when ES is ready:
Shell Script
#!/bin/sh
echo ">>>> Right before SG initialization <<<<"
# use while loop to check if elasticsearch is running
while true
do
  netstat -uplnt | grep :9300 | grep LISTEN > /dev/null
  verifier=$?
  if [ 0 = $verifier ]
  then
    echo "Running search guard plugin initialization"
    /elasticsearch/plugins/search-guard-6/tools/sgadmin.sh -h 0.0.0.0 -cd plugins/search-guard-6/sgconfig -icl -key config/client.key -cert config/client.pem -cacert config/root-ca.pem -nhnv
    break
  else
    echo "ES is not running yet"
    sleep 5
  fi
done
Install script in Dockerfile
You will need to install the script in the container so it's accessible after the container starts.
COPY sginit.sh /
RUN chmod +x /sginit.sh
Update entrypoint script
You will need to edit the entrypoint script or run script of your ES image so that it starts sginit.sh in the background BEFORE starting the ES process.
# Run sginit in background waiting for ES to start
/sginit.sh &
This way sginit.sh starts in the background and only initializes SG after ES has started.
The reason to start sginit.sh in the background before ES is so that it does not block ES from starting. The same logic applies if you put it after the ES start: it would never run unless you put the ES start itself in the background.
I would suggest putting a CMD in your Dockerfile to execute the script when the container starts:
FROM debian
RUN apt-get update && apt-get install -y nano && apt-get clean
EXPOSE 8484
CMD ["/bin/bash", "/opt/your_app/init.sh"]
There is another way, but before using it, check your requirements:
ENTRYPOINT "put your code here" && /bin/bash
# example: ENTRYPOINT service nginx start && service ssh start && /bin/bash (use && to separate your commands)
You can also use the wait-for-it script. It will wait on the availability of a host and TCP port. It is useful for synchronizing the spin-up of interdependent services and works like a charm with containers. It does not have any external dependencies, so you can just run it from a RUN command without doing anything else.
A Dockerfile example based on this thread:
FROM elasticsearch
# Make elasticsearch write data to a folder that is not declared as a volume in elasticsearch's official Dockerfile.
RUN mkdir /data && chown -R elasticsearch:elasticsearch /data && echo 'es.path.data: /data' >> config/elasticsearch.yml && echo 'path.data: /data' >> config/elasticsearch.yml
# Download wait-for-it
ADD https://raw.githubusercontent.com/vishnubob/wait-for-it/e1f115e4ca285c3c24e847c4dd4be955e0ed51c2/wait-for-it.sh /utils/wait-for-it.sh
# Copy the files you may need and your insert script
# Insert data into elasticsearch
RUN /docker-entrypoint.sh elasticsearch -p /tmp/epid & /bin/bash /utils/wait-for-it.sh -t 0 localhost:9200 -- path/to/insert/script.sh; kill $(cat /tmp/epid) && wait $(cat /tmp/epid); exit 0;

How to deal with state "Exit 0" in Docker

I have built a Docker image and afterwards run a container using Docker Compose. The following command does the job for me:
docker-compose up -d
I have restarted the PC and now I want to start the previous container that I've created before. So I have tried the following command:
$ docker-compose start
Starting php-apache ... done
Apparently it works, but it doesn't, as the output of the following command shows:
$ docker-compose ps
Name Command State Ports
---------------------------------------------------------------------------
php55devwork_php-apache_1 /bin/sh -c bash -C '/usr/l ... Exit 0
For sure something is wrong and I am trying to find out what.
How do I find why the command is failing?
Is there any place where I could see a log file or something that help me to identify and fix the error?
Here is the repository if you want to give it a try.
Update
If I remove the container: docker rm <container-id> and recreate it by running docker-compose up -d --build it works again.
Update #1
I am not able to see such weird characters (screenshot omitted).
This is what helped me to resolve this issue:
Under one of your services in the docker-compose YAML file, add tty: true so it looks like:
version: '3'
services:
  web:
    tty: true
Hopefully this helps someone; thumbs up if it helps you :)
I took a look into your Docker GitHub repo and setup_php_settings.
On line 27 there is source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND
and that runs apache2 in the foreground, so it shouldn't exit with status code 0.
But it seems to me like your setup_php_settings contains some weird characters when I run your image with compose (screenshot comparing the original and the fixed file omitted).
I have changed them to plain new lines and it worked for me. Let us know if it helped.
If you want to debug your docker container, you can run it without the entrypoint, like:
docker run -it yourImage bash
-- AFTER some investigation:
There were still some errors when I restarted the docker container - like in your case, a stopped container started again after a reboot. The problems were that the symbolic links already exist and apache2 gets grumpy about a pre-existing PID file, so we need to do something like the official php Docker image does.
This is the full setup_php_settings that worked for me after a container restart.
#!/bin/bash -x
set -e
PHP_ERROR_REPORTING=${PHP_ERROR_REPORTING:-"E_ALL & ~E_DEPRECATED & ~E_NOTICE"}
sed -ri 's/^display_errors\s*=\s*Off/display_errors = On/g' /etc/php5/apache2/php.ini
sed -ri 's/^display_errors\s*=\s*Off/display_errors = On/g' /etc/php5/cli/php.ini
sed -ri "s/^error_reporting\s*=.*$//g" /etc/php5/apache2/php.ini
sed -ri "s/^error_reporting\s*=.*$//g" /etc/php5/cli/php.ini
echo "error_reporting = $PHP_ERROR_REPORTING" >> /etc/php5/apache2/php.ini
echo "error_reporting = $PHP_ERROR_REPORTING" >> /etc/php5/cli/php.ini
mkdir -p /data/tmp/php/uploads
mkdir -p /data/tmp/php/sessions
mkdir -p /data/tmp/php/xdebug
chown -R www-data:www-data /data/tmp/php*
ln -sf /etc/php5/mods-available/zz-php.ini /etc/php5/apache2/conf.d/zz-php.ini
ln -sf /etc/php5/mods-available/zz-php-directories.ini /etc/php5/apache2/conf.d/zz-php-directories.ini
# Add symbolic link to get Zend out of the current install dir
ln -sf /usr/share/php/libzend-framework-php/Zend/ /usr/share/php/Zend
a2enmod rewrite
php5enmod mcrypt
# Apache gets grumpy about PID files pre-existing
: "${APACHE_PID_FILE:=${APACHE_RUN_DIR:=/var/run/apache2}/apache2.pid}"
rm -f "$APACHE_PID_FILE"
source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND "$@"
You can check logs with docker compose logs.
Looking through your repo, you have
ENTRYPOINT bash -C '/usr/local/bin/setup_php_settings';'bash'
which means that, without an interactive session, bash will exit immediately (with exit code 0) after reading end-of-file on stdin.
Normally getting an exit 0 should be a reason to celebrate, as it indicates that your command has ended successfully (http://www.tldp.org/LDP/abs/html/exit-status.html).
Having had a look at your Dockerfile, it looks like you're just invoking bash in your entrypoint, which will then exit for sure (as it is non-blocking). In order to serve some data, you should instead be calling php (a blocking operation that keeps the container up), as done in the official Docker files for php (see the CMD ["php", "-a"] at https://github.com/docker-library/php/blob/1c56325a69718a3e3cf76179e75d070b7e23da62/5.6/Dockerfile).
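Concretely, since setup_php_settings already ends with exec /usr/sbin/apache2 -DFOREGROUND (a blocking call), making the script itself the entrypoint avoids the trailing interactive bash that exits with code 0. A sketch of the relevant Dockerfile lines:
COPY setup_php_settings /usr/local/bin/setup_php_settings
RUN chmod +x /usr/local/bin/setup_php_settings
# No trailing ;'bash' - the script blocks on apache2 running in the foreground
ENTRYPOINT ["/usr/local/bin/setup_php_settings"]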
