I have a ticketing system (OTOBO/OTRS) that runs in Docker on Ubuntu 20.04 and uses an nginx reverse proxy for HTTPS. We only renew our certificates for 1 year at a time, and it's now time to update the existing SSL cert. How do I go about updating the certificate that's currently being used? The current certs live in a Docker volume and are located in /etc/nginx/ssl/.
I tried simply copying the new certificates into the nginx proxy container, replacing the existing ones. After a reboot, the site was no longer reachable. Below are the commands I ran.
sudo docker cp cert.crt container_id:/etc/nginx/ssl/
sudo docker cp cert.key container_id:/etc/nginx/ssl/
sudo docker exec otobo_nginx_ssl nginx -s reload
Does the above look correct or am I missing a step? I hardly ever have to use docker and am very green to it.
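For reference, a hedged way to sanity-check this kind of swap before reloading (assuming the container is named otobo_nginx_ssl and the vhost config points at files under /etc/nginx/ssl/; adjust names to your setup):

sudo docker exec otobo_nginx_ssl ls -l /etc/nginx/ssl/
# Check which cert/key filenames the nginx config actually references
sudo docker exec otobo_nginx_ssl grep -r ssl_certificate /etc/nginx/
# Validate the config before reloading
sudo docker exec otobo_nginx_ssl nginx -t
sudo docker exec otobo_nginx_ssl nginx -s reload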
I am running GitLab in a Docker container and wanted to configure HTTPS following these instructions. To do this, I opened a shell in the container with the following command in PowerShell:
docker exec -ti -u root b836c4cdfd37 bash
After entering the command sudo ufw allow https, the following error message is displayed:
WARN: initcaps
[Errno 2] iptables v1.8.4 (legacy): can't initialize iptables table `filter': Permission denied (you must be root)
Maybe iptables or your kernel needs to be upgraded.
Skipping adding existing rule
Skipping adding existing rule (v6)
How can I execute the sudo ufw allow https command without errors?
The instructions you are following are for installing GitLab directly on an Ubuntu host, not for running GitLab inside a Docker container, which requires a different set of installation and networking steps. In your scenario, ufw will not work as described in the guide because Docker manages the container's networking by default, and Docker's networking can interfere with attempts to manage your firewall configuration with ufw or iptables. Even if you manage to get the command to work, you'll find that Docker's network management can bypass your ufw configuration in the container anyhow.
To install GitLab in docker, you should follow the official docker installation instructions. You can also review all the other installation methods for GitLab for additional context.
If you really want to continue installing GitLab "manually" inside a container, just skip the ufw steps and make sure you have configured port mappings for the GitLab container from the Docker host (e.g. to map HTTP/HTTPS: docker run -p 80:80 -p 443:443 ...).
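For the official image route, a minimal sketch of a run command along the lines of the documented one (gitlab.example.com and $GITLAB_HOME are placeholders for your hostname and a host directory for persistent data):

# Publish HTTP/HTTPS/SSH to the host and persist config, logs and data in host volumes
docker run --detach \
  --hostname gitlab.example.com \
  --publish 443:443 --publish 80:80 --publish 22:22 \
  --name gitlab \
  --volume $GITLAB_HOME/config:/etc/gitlab \
  --volume $GITLAB_HOME/logs:/var/log/gitlab \
  --volume $GITLAB_HOME/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest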
Question: How do you make web traffic go through the certbot server and THEN to your app when ports 80/443 can only be assigned to one server within Container Optimized OS?
Context:
A regular certbot install doesn't work on Google Cloud's "Container Optimized OS" (which prevents write access, so no file can be executed). So I used a Docker container of certbot from Let's Encrypt, but it requires ports 80/443 to be open, which my current web app is using.
Previously I would run certbot on my old instance and then stop the server, and the certificate would remain valid for 90 days. However, the certbot Docker container only provides SSL while it is running on ports 80/443; once it is stopped, the SSL certificate is no longer valid.
Docker for letsencrypt: https://hub.docker.com/r/linuxserver/letsencrypt
Docker web app I want to host on port 80/443: https://hub.docker.com/r/lbjay/canvas-docker
Google Container Optimized Instance Info: https://cloud.google.com/container-optimized-os/docs/concepts/features-and-benefits
Here's a solution for DNS validation with Certbot via Cloud DNS, using the certbot/dns-google container image. It uses service account credentials to run the certbot-dns-google plugin inside the container and configures the Let's Encrypt certs in a bind-mounted location on the host.
You'll first need to add a file to your instance with service account credentials for the DNS Administrator role - see the notes below for more context. In the example command below, the credentials file is dns-svc-acct.json (placed in the working directory from which the command is called).
docker run --rm \
-v /etc/letsencrypt:/etc/letsencrypt:rw \
-v ${PWD}/dns-svc-acct.json:/var/dns-svc-acct.json \
certbot/dns-google certonly \
--dns-google \
--dns-google-credentials /var/dns-svc-acct.json \
--dns-google-propagation-seconds 90 \
--agree-tos -m team@site.com --non-interactive \
-d site.com
Some notes on the flags:
-v config-dir-mount
This mounts the configuration directory so that the files Certbot creates in the container propagate in the host's filesystem as well.
-v credentials-file-mount
This mounts the service account credentials from the host on the container.
--dns-google-credentials path-to-credentials
The container will use the mounted service account credentials for administering changes in Cloud DNS for validation with the ACME server (involves creating and removing a DNS TXT record).
--dns-google-propagation-seconds n | optional, default: 60
The number of seconds to wait for DNS changes to propagate before asking the ACME server to verify the DNS TXT record.
--agree-tos, -m email, --non-interactive | optional
These are helpful for running the container non-interactively, which is particularly useful when user interaction isn't possible (e.g. in continuous delivery).
Certbot command-line reference
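For renewals (not covered by the original command, so treat this as a sketch under the same assumptions about mounts and the credentials path), the same image can be run with the renew subcommand, e.g. from cron:

# Hypothetical renewal job; /path/to/dns-svc-acct.json stands in for wherever the credentials live on the host
docker run --rm \
  -v /etc/letsencrypt:/etc/letsencrypt:rw \
  -v /path/to/dns-svc-acct.json:/var/dns-svc-acct.json \
  certbot/dns-google renew --non-interactive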
Is it possible to configure DNS for the Concourse build container?
I know there is a build_args: argument with the docker-image-resource, but I am unable to get it to replicate the following docker build parameter: --dns=IP_ADDRESS...
Has anyone done something similar in their pipeline.yml?
It's unlikely you will be able to set this via Concourse due to lack of support in Docker.
The --dns=IP_ADDRESS option you reference is a docker run argument.
The docker build command doesn't allow you to change the DNS settings for the build containers that run under it.
This recent github issue links to a bunch of the related issues:
#1916 (comment)
#2267
#3851
#5779
#7966
#10171
#10324
#24928
Workarounds
Set Container DNS for a RUN step
You can modify the local /etc/resolv.conf during a build step in a Dockerfile:
FROM busybox:latest
RUN set -uex; \
    echo "nameserver 8.8.8.8" > /etc/resolv.conf; \
    cat /etc/resolv.conf; \
    ping -c 4 google.com
RUN cat /etc/resolv.conf
It will be back to normal for the next RUN step, though.
Set Daemon DNS
You can configure the Docker daemon with a custom DNS server for all containers that don't override the DNS themselves:
dockerd --dns 8.8.8.8
It's possible to run a separate "build" instance of the Docker daemon with custom DNS if you need the builds to use different settings from the daemon your containers run on.
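The same setting can also be made persistent via the daemon config file; a minimal sketch (assuming a systemd-managed daemon and that no daemon.json exists yet, since this overwrites it):

# Persist the DNS setting in the daemon config, then restart the daemon
echo '{ "dns": ["8.8.8.8"] }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker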
Set Host DNS
Edit /etc/resolv.conf on the host to point at your DNS server. This obviously affects everything running on the host.
It's possible to run a local caching DNS server that forwards the requests you care about to your internal DNS server and everything else to your normal upstream servers (similar to what Docker does locally for container DNS).
Does Docker Swarm support usage of Docker Registry with self-signed certificate?
I've created my cluster based on the steps from the official Docker documentation; it uses swarm master/nodes running inside containers.
It works well, but as soon as I try to log in to my Docker Registry I get an error message:
$ docker -H :4000 login https://...:443
...
Error response from daemon: Get https://.../v1/users/: x509: certificate signed by unknown authority
Is there an additional option which needs to be set, like --insecure-registry? Or do I need to somehow update Docker Swarm container?
You need to add your self-signed cert or personal CA to the list of trusted certificates on the host. For some reason, docker doesn't use the certificates on the daemon for this authentication. Here are the commands for a Debian host:
sudo mkdir -p /usr/local/share/ca-certificates
sudo cp ca.pem /usr/local/share/ca-certificates/ca-local.crt
sudo update-ca-certificates
sudo systemctl restart docker
The docker restart at the end is required for the daemon to reload the OS certificates.
As luka5z saw in the latest documentation, you can also add the certs directly to each docker engine by copying the cert to /etc/docker/certs.d/myregistrydomain.com:5000/ca.crt. This avoids trusting the self-signed CA on the entire OS.
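A minimal sketch of that per-registry approach (assuming the registry answers at myregistrydomain.com:5000 and the CA cert is ca.pem in the current directory):

# Trust the registry's CA for the docker engine only, not the whole OS
sudo mkdir -p /etc/docker/certs.d/myregistrydomain.com:5000
sudo cp ca.pem /etc/docker/certs.d/myregistrydomain.com:5000/ca.crt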
Is there a way I could update it with the required certificates?
Docker 17.06 will bring the command docker swarm ca (PR 48).
Meaning a docker swarm ca --rotate will be enough.
root@ubuntu:~# docker swarm ca --help
Usage: docker swarm ca [OPTIONS]
Manage root CA
Options:
      --ca-cert pem-file           Path to the PEM-formatted root CA certificate to use for the new cluster
      --ca-key pem-file            Path to the PEM-formatted root CA key to use for the new cluster
      --cert-expiry duration       Validity period for node certificates (ns|us|ms|s|m|h) (default 2160h0m0s)
  -d, --detach                     Exit immediately instead of waiting for the root rotation to converge
      --external-ca external-ca    Specifications of one or more certificate signing endpoints
      --help                       Print usage
  -q, --quiet                      Suppress progress output
      --rotate                     Rotate the swarm CA - if no certificate or key are provided, new ones will be generated
Here is a demo.
I also encountered your problem.
I was not able to identify the root cause of this, or what sets this limitation.
But I managed a workaround:
If the registry is insecure, make sure you start each docker daemon accordingly on each host.
You can find info on how to change daemon options here: https://docs.docker.com/engine/admin/systemd/
E.g. from my conf: --insecure-registry <private registry>. After that:
systemctl daemon-reload
systemctl restart docker
docker login <private registry>
on each docker host and pull the needed images.
After that you have all the images locally and it will not try to pull them anymore.
I know this is not the best solution :(
PS: I also had to add these parameters to each docker daemon:
--cluster-advertise=<host:ip> --cluster-store=consul://<consul ip:consul port>
Without these I could not run containers on different hosts; they all ended up on one randomly chosen host.
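A sketch of what those daemon options can look like as a systemd drop-in, following the linked docs (the drop-in path, registry address, interface, and Consul address are all placeholders):

# Create a drop-in that overrides the daemon command line, then reload and restart
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/options.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --insecure-registry myregistry.local:5000 --cluster-advertise=eth0:2376 --cluster-store=consul://127.0.0.1:8500
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker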
I've started Jenkins in a Docker container by mounting the Docker socket into it. So now I'm able to run docker commands from my Jenkins. But the specific folders of Docker aren't in my container (I just mounted the socket).
Now I need to use certs to access my Docker registry. The path of the certs needs to be: /etc/docker/certs.d/myregistry.com:5000/ca.crt
But this path does not exist in my Jenkins container, which just contains the bin and run folders of Docker.
What's the best way to connect the certificates for my Jenkins?
The way I'm doing it (for my SSL web server, but I think the principle is the same) is simply mounting the cert directory with -v.
E.g.:
docker run -v /etc/pki:/etc/pki:ro -p 443:443 mycontainer
Seems to work quite nicely (although it helps loads if you can wildcard the hostname, so your container doesn't need to "know" which host it's running on)
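Applied to the Jenkins question above, a possible sketch following the same -v approach (assuming the official jenkins/jenkins:lts image and that the registry CA already exists under /etc/docker/certs.d on the host; paths and ports are examples):

# Mount the Docker socket plus the host's certs directory read-only into the Jenkins container
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /etc/docker/certs.d:/etc/docker/certs.d:ro \
  jenkins/jenkins:lts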