How to restart host nginx from inside docker after certbot renew

If you have a Docker container running certbot, but an nginx instance using those certificates running on the host, how do you restart the host nginx from inside the Docker container?
This is the running container:
certbot:
  image: certbot/dns-ovh
  container_name: certbot
  volumes:
    - /etc/letsencrypt/:/etc/letsencrypt
    - /var/lib/letsencrypt:/var/lib/letsencrypt
    - /root/.secrets/certbot-ovh.ini:/root/.secrets/ovh-creds.ini
  entrypoint: /bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'

You have to add a --post-hook to the renew command, which uses SSH to send the nginx reload command to the host.
For this to work, the container needs to run with network_mode: "host", so that localhost inside the container reaches the host's SSH daemon.
Then you need to install sshpass and openssh when starting/recreating the container. This is done with:
apk add openssh sshpass
Then, in the post-hook, you SSH into the host and reload nginx:
sshpass -p 'your password' ssh -o 'StrictHostKeyChecking no' root@localhost 'systemctl reload nginx'
assuming you have root access. sshpass supplies the password to ssh non-interactively, the StrictHostKeyChecking option skips the "do you want to add the fingerprint" prompt, and the command sends the reload to nginx on the host.
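Before wiring this into the post-hook, you can verify it from inside the running container (a sketch; it assumes openssh and sshpass are already installed as above, and 'systemctl is-active nginx' is just a harmless probe):
docker exec -it certbot sh -c \
  "sshpass -p 'your password' ssh -o 'StrictHostKeyChecking no' root@localhost 'systemctl is-active nginx'"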
Putting this all into the docker-compose file looks like this:
certbot:
  image: certbot/dns-ovh
  container_name: certbot
  network_mode: "host"
  volumes:
    - /etc/letsencrypt/:/etc/letsencrypt
    - /var/lib/letsencrypt:/var/lib/letsencrypt
    - /root/.secrets/certbot-ovh.ini:/root/.secrets/ovh-creds.ini
  entrypoint: >
    /bin/sh -c
    'apk add openssh sshpass &&
    trap exit TERM; while :;
    do certbot renew --post-hook
    "sshpass -p '"'"'your password'"'"' ssh -o '"'"'StrictHostKeyChecking no'"'"' root@localhost '"'"'systemctl reload nginx'"'"'";
    sleep 12h & wait $${!}; done;'
The > here (a YAML folded scalar) allows writing as many indented lines as needed, without adding another layer of escaping; YAML folds them back into a single line before the shell sees them.
The '"'"' sequence escapes the single ' inside the --post-hook quotes: it closes the first single quote, opens a double-quoted string containing a single quote, and then opens the single quote again.
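To see the trick in isolation, here is a minimal shell sketch (the echoed string is just an example):
echo 'It'"'"'s working'
# the shell concatenates 'It' + "'" + 's working' and prints: It's working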

Related

Commands in Dockerfile and commands in docker-compose

I'm using the jonasal/nginx-certbot:latest image to run several services under NGINX with automatic certificate generation.
I now want to create a second instance for UAT testing where all domains shift from [subdomain].domain.com to [subdomain]-uat.domain.com. To enable this I have substituted the domains with environment variables and used the following command in the docker-compose.yaml file:
command: /bin/bash -c "envsubst < /etc/nginx/user_conf.d/ssl-server.conf.template > /etc/nginx/user_conf.d/ssl-server.conf && nginx -g 'daemon off;'"
This produces the correct ssl-server.conf file but does not seem to start the certbot process.
I'm trying to figure out whether a command in the docker-compose.yaml overrides the CMD in the Dockerfile, but cannot find anything that says so.
I'm also trying to trigger the startup script from within the docker-compose.yaml file like:
command: /bin/bash -c "envsubst < /etc/nginx/user_conf.d/ssl-server.conf.template > /etc/nginx/user_conf.d/ssl-server.conf && nginx -g 'daemon off;' && /scripts/start_nginx_certbot.sh"
But this does not make a difference.
What am I missing?
Your command overrides the command in the image, so you need to run the startup script yourself, as you show in your last command.
I think your error there is that you also run nginx. When you do it that way, your 'base' nginx starts, and since it never exits, the image's normal startup script is never run. Try this:
command: /bin/bash -c "envsubst < /etc/nginx/user_conf.d/ssl-server.conf.template > /etc/nginx/user_conf.d/ssl-server.conf && /scripts/start_nginx_certbot.sh"
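For reference, a minimal sketch of what envsubst does with such a template (DOMAIN_SUFFIX is a hypothetical variable name, not something the image defines):
export DOMAIN_SUFFIX="-uat"
echo 'server_name api$DOMAIN_SUFFIX.domain.com;' | envsubst
# prints: server_name api-uat.domain.com;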

Inject SSH key into a Docker container

I am trying to find a "global" solution for injecting an SSH key into a container. I know that there are several solutions, including Docker BuildKit and so on, but I don't want to build an image and inject the SSH key. I want to inject the SSH key by using an existing image with docker compose.
I use the following docker compose file:
version: '3.1'

services:
  server1:
    image: XXXXXXX
    container_name: server1
    command: bash -c "/root/init.sh && python3 /root/my_python.py"
    environment:
      - MANAGED_HOST=mserver
    volumes:
      - ./init.sh:/root/init.sh
    secrets:
      - id_rsa

secrets:
  id_rsa:
    file: /home/user/.ssh/id_rsa
The init.sh is as follows:
#!/bin/bash
eval "$(ssh-agent -s)" > /dev/null
if [ ! -d "/root/.ssh/" ]; then
    mkdir /root/.ssh
    ssh-keyscan $MANAGED_HOST > /root/.ssh/known_hosts
fi
ssh-add -k /run/secrets/id_rsa
If I run docker compose with the command parameter
bash -c "/root/init.sh && python3 /root/my_python.py", then the SSH authentication to the appropriate remote host ($MANAGED_HOST) does not work.
An agent process is running:
root 8 1 0 12:50 ? 00:00:00 ssh-agent -s
known_hosts is OK:
root@c67655d87ced:~# cat /root/.ssh/known_hosts
BLABLABLA ssh-rsa AAAAB3BLABLABLA....
and the agent is running, but the private key is not added:
root@c67655d87ced:~# ssh-add -l
Could not open a connection to your authentication agent.
Now, if I log into the container (docker exec -it server1 /bin/bash) and run the commands from init.sh one by one from the command line, the SSH authentication to the appropriate remote host ($MANAGED_HOST) works?!?
Any idea how I can get it working with docker compose?
It should be enough to cause the file $HOME/.ssh/id_rsa to exist with appropriate permissions; you don't need an ssh agent running.
#!/bin/sh
if ! [ -d "$HOME/.ssh" ]; then
    mkdir "$HOME/.ssh"
fi
chmod 0700 "$HOME/.ssh"
if [ -n "$MANAGED_HOST" ]; then
    ssh-keyscan "$MANAGED_HOST" >> "$HOME/.ssh/known_hosts"
fi
if [ -f /run/secrets/id_rsa ]; then
    cp /run/secrets/id_rsa "$HOME/.ssh/id_rsa"
    chmod 0400 "$HOME/.ssh/id_rsa"
fi
# exec "$@"
A typical pattern is to use the Dockerfile ENTRYPOINT to do first-time setup tasks like this. That will get passed the CMD as arguments, and the commented exec "$@" line at the end of the file runs that as a command. You'd set this up in your image's Dockerfile like:
FROM XXXXXX
...
# Script must be executable on the host, and must start with a
# #!/bin/sh "shebang" line
COPY init.sh /root
# MUST use JSON-array form
ENTRYPOINT ["/root/init.sh"]
# Can use any Dockerfile syntax
CMD ["python3", "/root/my_python.py"]
In your specific example, you're launching init.sh as a subprocess. The ssh-agent setup sets some environment variables, like $SSH_AUTH_SOCK, but when they are set in a subprocess they don't get propagated back out to the parent process. You can use the standard POSIX shell . builtin (the bash source builtin is equivalent, but non-standard) to have those environment variables set in the context of the parent shell:
command: sh -c ". /root/init.sh && exec python3 /root/my_python.py"
The exec replaces the shell wrapper with the Python script, which you generally want. The Python script will also wind up being the parent process of ssh-agent, which could surprise your process if the agent happens to exit.
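A quick way to see the difference (a sketch, using the paths from the question):
# run as a child process: the agent's variables die with the child shell
sh /root/init.sh
echo "$SSH_AUTH_SOCK"   # empty in this shell

# source it instead: the variables persist in the current shell
. /root/init.sh
echo "$SSH_AUTH_SOCK"   # now points at the agent socket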

Can the kafka connectors be configured via env variables passed when launching docker, or is curl the only way?

This is the Docker image we use to host Kafka Connect with the plugins:
FROM confluentinc/cp-kafka-connect:5.3.1
ENV CONNECT_PLUGIN_PATH=/usr/share/java
# JDBC-MariaDB
RUN wget -nv -P /usr/share/java/kafka-connect-jdbc/ https://downloads.mariadb.com/Connectors/java/connector-java-2.4.4/mariadb-java-client-2.4.4.jar
# SNMP Source
RUN wget -nv -P /tmp/ https://github.com/name/kafka-connect-snmp/releases/download/0.0.1.11/kafka-connect-snmp-0.0.1.11.tar.gz
RUN mkdir /tmp/kafka-connect-snmp && tar -xf /tmp/kafka-connect-snmp-0.0.1.11.tar.gz -C /tmp/kafka-connect-snmp/
RUN mv /tmp/kafka-connect-snmp/usr/share/kafka-connect/kafka-connect-snmp /usr/share/java/
I run this image via docker-compose, and I have specified some common env variables defined here: https://docs.confluent.io/current/installation/docker/config-reference.html#kafka-connect-configuration
But I would also like to specify connector-related config via env variables; for example, I have done this:
- CONNECT_NAME=snmp-connector
- CONNECT_CONNECTOR_CLASS=com.github.jcustenborder.kafka.connect.snmp.SnmpTrapSourceConnector
- CONNECT_TOPIC=fm_snmp
What I am trying to do is, instead of calling
curl -X POST -H "Content-Type: application/json" --data '{"name":"","config":{"connector.class":"com.github.jcustenborder.kafka.connect.snmp.SnmpTrapSourceConnector","topic":"fm_snmp"}}' http://localhost:8083/connectors
I want to just specify it via env variables, but unfortunately it's not working. When I list the active connectors with curl localhost:8083/connectors/, I don't see it there.
So finally, my question: can I configure connectors via env variables, or is curl the only way?
You can't pass it as environment variables, but you can specify it as part of your Docker startup by passing in a custom command. Here's an example of doing it with Docker Compose. If you're calling docker run itself you'd need to rework this into an appropriate structure:
kafka-connect:
  image: confluentinc/cp-kafka-connect:5.3.1
  environment:
    CONNECT_REST_PORT: 18083
    CONNECT_REST_ADVERTISED_HOST_NAME: "kafka-connect"
    […]
  volumes:
    - $PWD/scripts:/scripts
  command:
    - bash
    - -c
    - |
      /etc/confluent/docker/run &
      echo "Waiting for Kafka Connect to start listening on kafka-connect ⏳"
      while [ $$(curl -s -o /dev/null -w %{http_code} http://kafka-connect:8083/connectors) -eq 000 ] ; do
        echo -e $$(date) " Kafka Connect listener HTTP state: " $$(curl -s -o /dev/null -w %{http_code} http://kafka-connect:8083/connectors) " (waiting for 200)"
        sleep 5
      done
      nc -vz kafka-connect 8083
      echo -e "\n--\n+> Creating Kafka Connect Elasticsearch sink"
      /scripts/create-es-sink.sh
      sleep infinity
This example calls a script to create the connector, but you can also embed the REST call directly, as sketched below.
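For instance, a minimal sketch reusing the connector config from the question: replace the /scripts/create-es-sink.sh line in the command block above with something like
curl -X POST -H "Content-Type: application/json" \
  --data '{"name":"snmp-connector","config":{"connector.class":"com.github.jcustenborder.kafka.connect.snmp.SnmpTrapSourceConnector","topic":"fm_snmp"}}' \
  http://kafka-connect:8083/connectors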

User permission problems when retrieving certificates with docker certbot container for nginx

I realised how badly written this question was, so I have rewritten the whole thing together with a solution.
TLDR: I wanted a solution or suggestion on how to get the Let's Encrypt certificates and keys retrieved by the docker certbot/certbot container to be readable by the nginx:latest container.
The reason they are not readable is that the certificates are stored in a folder, typically /etc/letsencrypt/archive/DOMAIN/, and the archive folder is owned by root:root with mode 0700. In addition, the key is owned by root:root with mode 0600.
The nginx container runs the nginx master process as root (PID 1), but it spawns worker processes which need to read the certificates and keys, and those workers run as an unprivileged user.
DOCKER-COMPOSE config
---
version: '3'
services:
  nginx:
    container_name: nginx
    image: nginx:latest
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./data/nginx/conf:/etc/nginx/conf.d
      # The volume below provides the DHPARAM file.
      - ./data/nginx/tls:/etc/pki/tls
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    # This reloads the certificates every 24h as long as the container is running
    command: "/bin/sh -c 'while :; do sleep 24h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  # certbot:
  #   container_name: certbot
  #   image: certbot/certbot
  #   volumes:
  #     - ./data/certbot/conf:/etc/letsencrypt
  #     - ./data/certbot/www:/var/www/certbot
  #   depends_on:
  #     - nginx
  #   # This checks if the certificates need to be renewed every 12 hours.
  #   entrypoint: "/bin/sh -c \"trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;\""
NGINX config
server {
listen 80 default_server;
server_name _;
location /.well-known/acme-challenge/ {
allow all;
root /var/www/certbot;
}
location / {
return 301 https://$host$request_uri;
}
}
I have excluded unnecessary lines in the config. After doing the initial retrieval of the certificates I will remove the comments in the yaml file so that the certbot container retrieves new certificates automatically the next time I do docker-compose up -d.
This is the command I ran after starting the nginx container:
docker run -it --rm \
  -v /FQPN/certbot/conf:/etc/letsencrypt \
  -v /FQPN/certbot/www:/var/www/certbot \
  certbot/certbot certonly \
  -m EMAILADDRESS \
  --webroot \
  --agree-tos \
  --webroot-path=/var/www/certbot \
  -d DOMAIN
With what you see above, I get valid certificates, but they are only readable by root.
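You can confirm this from the host (paths as in the question; the expected modes are the ones described above):
ls -ld /etc/letsencrypt/archive                     # drwx------ root root  (0700: only root can enter)
ls -l /etc/letsencrypt/archive/DOMAIN/privkey1.pem  # -rw------- root root  (0600: only root can read)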
I want this setup to retrieve new certificates when needed, but if I manually change the ownership and mode on the folders/files which restrict this to root only, those changes will be undone when new certificates are retrieved.
I want a solution where the unprivileged nginx user can read those certificates and keys without having to do manual work whenever new certificates are retrieved.
I checked if there were certbot options which could be useful. After doing certbot --help, I saw there exists a certbot -h all option which gives you every single option for certbot.
In there I found a --post-hook option which is only run when new certificates are successfully retrieved.
The solution was to change the following line in the docker-compose yaml file:
entrypoint: "/bin/sh -c \"trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;\""
I changed this to the following:
entrypoint: "/bin/sh -c \"trap exit TERM; while :; do certbot renew --post-hook 'chown root:NGINXUID /etc/letsencrypt/live /etc/letsencrypt/archive && chmod 750 /etc/letsencrypt/live /etc/letsencrypt/archive && chown root:NGINXUID /etc/letsencrypt/archive/DOMAIN/privkey*.pem && chmod 640 /etc/letsencrypt/archive/DOMAIN/privkey*.pem'; sleep 12h & wait $${!}; done;\""
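For readability, here is the same post-hook written out as a plain script (NGINXUID and DOMAIN are the same placeholders as in the entrypoint above):
# let the nginx group traverse the directories certbot locks down to 0700
chown root:NGINXUID /etc/letsencrypt/live /etc/letsencrypt/archive
chmod 750 /etc/letsencrypt/live /etc/letsencrypt/archive
# let the nginx group read the private keys certbot creates with mode 0600
chown root:NGINXUID /etc/letsencrypt/archive/DOMAIN/privkey*.pem
chmod 640 /etc/letsencrypt/archive/DOMAIN/privkey*.pem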

docker-compose run multiple commands for a service

I am using docker on windows - version 18.03 (client)/18.05 (server). I have created a docker-compose file for the ELK stack. Everything is working fine. What I would like to do is install logtrail before kibana is started. I was thinking about copying logtrail*.zip first, then calling install:
container_name: kibana
(...)
command:
  - docker cp kibana:/ ./kibana/logtrail/logtrail-6.7.1-0.1.31.zip
  - /bin/bash
  - ./bin/kibana-plugin install/logtrail-6.7.1-0.1.31.zip
But that doesn't look like the right way: first of all it doesn't work, second of all I am not sure if I can call multiple commands like I did, and third of all I'm not sure if docker cp is even allowed in a command at that stage of service creation.
command:
  - /bin/bash
  - -c
  - |
    echo "This is a multiline command"
    echo "See how I escape $$ sign"
    echo $$PATH
You can run multiple commands like above; however, you cannot run docker cp as part of your command.
You can run multiple commands for a service in docker compose by:
command: sh -c "command1 && command2 && command3"
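Applied to the kibana case from the question, a hypothetical sketch (it assumes the plugin zip is volume-mounted at /tmp/logtrail-6.7.1-0.1.31.zip and that ./bin/kibana is the image's startup command, which may differ per image):
command: >
  bash -c "./bin/kibana-plugin install file:///tmp/logtrail-6.7.1-0.1.31.zip
  && exec ./bin/kibana"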
THAT'S MY SOLUTION FOR THIS CASE:
# OPTION 01:
# command: >
#   bash -c "chmod +x /scripts/rs-init.sh
#   && sh /scripts/rs-init.sh"
# OPTION 02:
# entrypoint: [ "bash", "-c", "chmod +x /scripts/rs-init.sh && sh /scripts/rs-init.sh" ]
If you're looking to install software, David Maze's comment seems to be the standard path. If you want to actually run multiple commands, look at the answer to the SO question Using Docker-Compose, how to execute multiple commands.
