Question: How do you route web traffic through a certbot server and THEN to your app when ports 80/443 can only be bound by one server on Container-Optimized OS?
Context:
A regular certbot install doesn't work on Google Cloud's "Container-Optimized OS", whose locked-down filesystem prevents installing or executing new binaries. So I used the Let's Encrypt certbot Docker container instead, but it requires ports 80/443, which my current web app is already using.
Previously I would run certbot on my old instance, then stop it, and the certificate remained valid for 90 days. Running the certbot Docker container, however, only provides SSL while it occupies ports 80/443; once it stops, the SSL certificate is no longer served.
Docker for letsencrypt: https://hub.docker.com/r/linuxserver/letsencrypt
Docker web app I want to host on port 80/443: https://hub.docker.com/r/lbjay/canvas-docker
Google Container Optimized Instance Info: https://cloud.google.com/container-optimized-os/docs/concepts/features-and-benefits
Here's a solution using DNS validation for Certbot via Cloud DNS with the certbot/dns-google container image. It uses service account credentials to run the certbot-dns-google plugin in an executable container; this writes the Let's Encrypt certs to a bind-mounted location on the host.
You'll first need to add a file to your instance with service account credentials for the DNS Administrator role - see the notes below for more context. In the example command below, the credentials file is dns-svc-acct.json (placed in the working directory from which the command is called).
docker run --rm \
-v /etc/letsencrypt:/etc/letsencrypt:rw \
-v ${PWD}/dns-svc-acct.json:/var/dns-svc-acct.json \
certbot/dns-google certonly \
--dns-google \
--dns-google-credentials /var/dns-svc-acct.json \
--dns-google-propagation-seconds 90 \
--agree-tos -m team@site.com --non-interactive \
-d site.com
Some notes on the flags:
-v config-dir-mount
This mounts the configuration directory so that the files Certbot creates in the container persist on the host's filesystem as well.
-v credentials-file-mount
This mounts the service account credentials from the host on the container.
--dns-google-credentials path-to-credentials
The container will use the mounted service account credentials for administering changes in Cloud DNS for validation with the ACME server (involves creating and removing a DNS TXT record).
--dns-google-propagation-seconds n | optional, default: 60
--agree-tos, -m email, --non-interactive | optional
These can be helpful for running the container non-interactively; they're particularly useful when user interaction might not be possible (e.g. continuous delivery).
Certbot command-line reference
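Since the certs expire every 90 days, the same container can be re-run periodically with certbot's renew subcommand. A sketch as a root crontab entry (the schedule and the host-side credentials path are assumptions):

```shell
# Renew twice daily; certbot only actually renews when the cert is
# within 30 days of expiry. Assumes the credentials file was placed
# at /root/dns-svc-acct.json on the host.
0 0,12 * * * docker run --rm \
  -v /etc/letsencrypt:/etc/letsencrypt:rw \
  -v /root/dns-svc-acct.json:/var/dns-svc-acct.json \
  certbot/dns-google renew --non-interactive
```

Because validation happens over DNS, this never needs ports 80/443, so the web app can keep them.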
Related
I have a ticketing system (OTOBO/OTRS) that runs in Docker on Ubuntu 20.04 and uses an nginx reverse proxy for HTTPS. We only renew our certificates one year at a time, and it's now time to update the existing SSL cert. How do I go about updating the certificate that's currently in use? The current certs exist in a Docker volume and are located in /etc/nginx/ssl/.
I have tried just copying the certificates into the Nginx proxy container replacing the existing ones. After a reboot, the site was no longer reachable. Below is the example of commands I ran.
sudo docker cp cert.crt container_id:/etc/nginx/ssl/
sudo docker cp cert.key container_id:/etc/nginx/ssl/
sudo docker exec otobo_nginx_ssl nginx -s reload
Does the above look correct or am I missing a step? I hardly ever have to use docker and am very green to it.
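For reference, the copy-and-reload approach above can be sanity-checked before and after the reload (a sketch; the container name otobo_nginx_ssl comes from the commands above, the hostname is a placeholder):

```shell
# Validate the nginx config inside the container before reloading -
# a bad cert/key pair will be reported here instead of taking the site down
sudo docker exec otobo_nginx_ssl nginx -t

# After the reload, confirm the certificate actually being served and its
# expiry date (replace tickets.example.com with your real hostname)
echo | openssl s_client -connect tickets.example.com:443 \
  -servername tickets.example.com 2>/dev/null \
  | openssl x509 -noout -enddate
```

Note that `docker cp` only changes the container's copy; if the certs live in a named volume mounted at /etc/nginx/ssl/, updating the files in the volume itself is what survives a container recreation.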
I have a Nginx Proxy Manager container, which proxies docker containers as well as some physical devices within host external network.
For NPM to get access to them, I've created a network:
sudo docker network create -d macvlan \
--subnet=192.168.0.0/23 \
--gateway=192.168.0.1 \
-o parent=enp2s0 \
npm
and added NPM to it with:
sudo docker network connect --ip 192.168.0.12 npm npm_nginxproxymanager_1
The issue with this is that after rebooting the host machine, the IP is not persistent.
NPM is still within that network, but the IP it gets is for some reason automatically assigned, and becomes 192.168.0.1. How can I make the container IP stay 192.168.0.12 after a reboot?
As discussed before, you are already using the --ip flag to set the IP.
To keep it persistent across sessions, you would need to run that docker network connect command at login, for example from a .bashrc or .profile file.
Or set it up as a service, like chung1905/docker-network-connector does.
For context - I am attempting to deploy OKD in an air-gapped environment, which requires mirroring an image registry. This private, secured registry is then pulled from by other machines in the network during the installation process.
To describe the environment - the host machine where the registry container runs is on CentOS 7.6. The other machines are all VMs running Fedora CoreOS under libvirt. The VMs and the host are connected by a libvirt virtual network with DHCP settings (dnsmasq) that give the VMs static IPs. The host machine also hosts the DNS server, which, as far as I can tell, is configured properly: I can ping every machine from every other machine using its fully qualified domain name and access specific ports (such as the port the Apache server listens on). Podman is used instead of Docker for container management for OKD, but as far as I can tell the commands are exactly the same.
I have the registry running in the air-gapped environment using the following command:
sudo podman run --name mirror-registry -p 5000:5000 -v /opt/registry/data:/var/lib/registry:z \
-v /opt/registry/auth:/auth:z -v /opt/registry/certs:/certs:z -e REGISTRY_AUTH=htpasswd \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.pem -e REGISTRY_HTTP_TLS_KEY=/certs/registry-key.pem \
-d docker.io/library/registry:latest
It is accessible using curl -u username:password https://host-machine.example.local:5000/v2/_catalog, which returns {"repositories":[]}. I believe this confirms that my TLS and authorization configurations are correct. However, if I transfer the ca.pem file (used to sign the SSL certificates the registry uses) over to one of the VMs on the virtual network and attempt the same curl command, I get an error:
connect to 192.168.x.x port 5000 failed: Connection refused
Failed to connect to host-machine.example.local port 5000: Connection refused
Closing connection 0
This is quite strange to me, as I've been able to use this method to communicate with the registry from the VMs in the past, and I'm not sure what has changed.
After some further digging, it seems like there is some sort of issue with the port itself, but I can't be sure where the issue is stemming from. For example, if I run sudo netstat -tulpn | grep LISTEN on the host, I receive a line indicating that podman (conmon) is listening on the correct port:
tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN 48337/conmon
but if I test whether the port is accessible from the VM, (nc -zvw5 192.168.x.x 5000) I get a similar error: Ncat: Connection refused. If I use the same test on any of the other listening ports on the host, it indicates successful connections to those ports.
Please note, I have completely disabled firewalld, so as far as I know, all ports are open.
I'm not sure if the issue is with my DNS settings, or the virtual network, or with the registry itself and I'm not quite sure how to further diagnose the issue. Any insight would be much appreciated.
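A few checks that can narrow this down (a diagnostic sketch, not a definitive answer; the elided 192.168.x.x address is from the question):

```shell
# 1. Is the container still running? Podman has no central daemon, so
#    containers started with `podman run -d` do NOT survive a host reboot
#    unless wrapped in a systemd unit - a common cause of "it worked before".
sudo podman ps --filter name=mirror-registry

# 2. Does the port answer locally on the host itself?
curl -k https://localhost:5000/v2/_catalog

# 3. Is anything besides firewalld filtering the path? Check iptables directly,
#    since libvirt installs its own rules independently of firewalld:
sudo iptables -L -n | grep 5000

# 4. Test from the VM against the raw IP, taking DNS out of the picture:
nc -zvw5 192.168.x.x 5000
```

If check 1 shows no container, `podman generate systemd` can produce a unit file so the registry starts on boot.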
I have used this image https://hub.docker.com/r/bibinwilson/jenkins-slave/
and created a container with the command below:
docker run -d -p 80:80 bibinwilson/jenkins-slave
If your host needs to allow connections from a Jenkins instance hosted on a different machine, you will need to open up the TCP port. This can be achieved by editing the Docker config file and setting (for example):
DOCKER_OPTS="-H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock"
The Docker configuration file location depends on your system, but it is likely to be /etc/init/docker.conf, /etc/default/docker, or /etc/default/docker.io.
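On systemd-based hosts those DOCKER_OPTS files are often ignored; there the equivalent is a drop-in override for the docker service (a sketch, reusing the same ports as above):

```shell
# Create a systemd drop-in that overrides the daemon's listen addresses
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock
EOF

sudo systemctl daemon-reload
sudo systemctl restart docker
```

The empty `ExecStart=` line clears the packaged command before the replacement is set; without it systemd refuses to start the unit with two ExecStart lines.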
I have since set up Docker host authentication with Docker TLS certificates, and it's now working fine.
From the document in the following link: https://github.com/hyperledger/fabric/blob/master/docs/dev-setup/install.md
we learned that we should:
Make sure that the Docker daemon initialization includes the options
-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
However, should we initialize the docker every time we restart the blockchain server?
In addition, I conduct the following command:
nohup docker daemon -g /data/docker -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock&
What does -g /data/docker mean?
The options you are passing to docker do the following:
-g /data/docker: this changes the runtime directory from /var/lib/docker to the one you've provided (on modern Docker versions this flag has been replaced by --data-root)
-H tcp://0.0.0.0:2375: this tells docker to listen on all network interfaces to port 2375, unencrypted. Caution: this allows anyone with network access to your machine to have full root access, a firewall or isolated machine is required for security.
-H unix:///var/run/docker.sock: this tells docker to process commands from any user with access to this socket, typically restricted to root and members of the "docker" group.
I'm not familiar with the blockchain install, but a docker engine can typically restart the containers it manages, so it should not need to be restarted each time - only started on boot as a service.
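Once the daemon is listening on TCP, a client on another machine can target it with -H or the DOCKER_HOST variable (the hostname below is a placeholder):

```shell
# Point a remote docker client at the daemon. Unencrypted - see the
# security caution above; restrict with a firewall or add TLS certs.
docker -H tcp://build-host.example.com:2375 ps

# Or via environment variable, which most tooling honors automatically
export DOCKER_HOST=tcp://build-host.example.com:2375
docker ps
```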