Can't enable SSL with docker-letsencrypt-nginx-proxy-companion - docker

I want to enable SSL with docker-letsencrypt-nginx-proxy-companion.
This is the docker-compose.yml:
version: "3.3"
services:
nginx-proxy:
image: jwilder/nginx-proxy
container_name: nginx-proxy
ports:
- "80:80"
- "443:443"
volumes:
- certs:/etc/nginx/certs:ro
- vhostd:/etc/nginx/vhost.d
- html:/usr/share/nginx/html
- /var/run/docker.sock:/tmp/docker.sock:ro
restart: always
db:
# ---
wordpress:
# ---
environment:
# ---
VIRTUAL_HOST: blog.ironsand.net
LETSENCRYPT_HOST: blog.ironsand.net
LETSENCRYPT_EMAIL: mymail#example.com
restart: always
letsencrypt-nginx-proxy-companion:
container_name: letsencrypt
image: jrcs/letsencrypt-nginx-proxy-companion
volumes:
- certs:/etc/nginx/certs
- vhostd:/etc/nginx/vhost.d
- html:/usr/share/nginx/html
- /var/run/docker.sock:/var/run/docker.sock:ro
environment:
NGINX_PROXY_CONTAINER: nginx-proxy
restart: always
networks:
default:
external:
name: nginx-proxy
volumes:
certs:
vhostd:
html:
docker logs letsencrypt shows that a certificate exists already.
/etc/nginx/certs/blog.ironsand.net /app
Creating/renewal blog.ironsand.net certificates... (blog.ironsand.net)
2020-04-09 00:03:23,711:INFO:simp_le:1581: Certificates already exist and renewal is not necessary, exiting with status code 1.
/app
But ACME challenge returns nothing. (failure?)
$ docker exec letsencrypt bash -c 'echo "Hello world!" > /usr/share/nginx/html/.well-known/acme-challenge/hello-world'
$
Port 443 is listening, but it is closed from outside.
// in remote server
$ sudo lsof -i:443
[sudo] password for ubuntu:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
docker-pr 10910 root 4u IPv6 633694 0t0 TCP *:https (LISTEN)
// from local pc
❯ nmap -p 443 blog.ironsand.net
Starting Nmap 7.80 ( https://nmap.org ) at 2020-04-09 09:44 JST
Nmap scan report for blog.ironsand.net (153.127.40.107)
Host is up (0.035s latency).
rDNS record for 153.127.40.107: ik1-418-41103.vs.sakura.ne.jp
PORT STATE SERVICE
443/tcp closed https
Nmap done: 1 IP address (1 host up) scanned in 0.21 seconds
I'm using packet filtering, but ports 80 and 443 are open, and I'm not using a firewall.
How can I investigate more where the problem exists?

I can't solve your problem directly, but I can give some hints that may help you solve it.
Your command returning nothing is expected.
bash -c 'echo "Hello world!" > /usr/share/nginx/html/.well-known/acme-challenge/hello-world'
This command only writes "Hello world!" to that location and normally returns nothing. See https://www.gnu.org/savannah-checkouts/gnu/bash/manual/bash.html#Redirections
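To actually verify the challenge, read the file back inside the container and then fetch it over plain HTTP from outside; Let's Encrypt validates over port 80, not 443. A quick sketch using the container name and domain from your compose file:
docker exec letsencrypt cat /usr/share/nginx/html/.well-known/acme-challenge/hello-world
curl -i http://blog.ironsand.net/.well-known/acme-challenge/hello-world
If the curl prints "Hello world!", the proxy is serving the challenge directory correctly.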
Look inside the certs folder.
Have a look into the certs folder and maybe clean it up. Check that the folder is mounted correctly in your nginx container. Open a shell inside the container and check the SSL folder.
Check that the firewall isn't breaking anything.
No connection is possible from outside? What about from the inside? Log in on your Docker host and check the connection from there (openssl and curl are your friends). A couple of commands you could try are sketched below.
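For example, run something like this directly on the Docker host (a rough sketch; adjust the hostname as needed):
curl -vkI https://localhost                # -k skips certificate validation
openssl s_client -connect localhost:443 -servername blog.ironsand.net </dev/null
docker port nginx-proxy                    # shows which ports are actually published
If these work locally but nmap from outside still reports the port as closed, the problem sits in front of Docker (packet filter or hosting-provider firewall), not in the containers.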
Don't use SSL inside the container.
I often see problems when somebody tries to use SSL with ACME images and "wild mounting and shared volumes". But I have never heard about problems when the same people use a normal reverse proxy. I explain a good setup below.
So just remove the whole letsencrypt code from your container and stop publishing port 443 from it.
(Additionally, you can switch to a non-root image and expose only ports that don't need root privileges.)
Then install nginx on your host and set up a reverse proxy (something like proxy_pass 127.0.0.1:8080). Now install certbot and run it; it is straightforward and walks you through the setup.
Certbot can also renew your certificates for you. A rough sketch of that setup follows.
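This is only a sketch on Debian/Ubuntu; the 127.0.0.1:8080 upstream is an assumption for wherever you publish the WordPress container:
sudo apt-get install -y nginx certbot python3-certbot-nginx
# minimal reverse-proxy server block, e.g. in /etc/nginx/sites-available/blog.ironsand.net:
#   server {
#     server_name blog.ironsand.net;
#     location / { proxy_pass http://127.0.0.1:8080; }
#   }
sudo certbot --nginx -d blog.ironsand.net   # obtains the certificate and adds the SSL config
The packaged certbot also installs a renewal timer, so the certificates keep renewing on their own.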

Related

Connection refused | Docker & ssh

I'm trying to connect to an ssh server from a docker container started with docker-compose.
On my localhost, I have opened the port with "ufw allow 4222", and I have added port 4222 to my docker-compose.yml file.
I have also added my localhost's public key to the container and to the server's authorized keys. The problem is that it keeps failing. Does anyone know what else I can check or take into account? Thank you.
docker-compose.yml
version: '3.9'
services:
  hermes:
    depends_on:
      - mongodb
    build:
      context: ./hermes-app/
    container_name: hermes
    tty: true
    ports:
      - "5000:5000"
      - "4222"
    environment:
      - SLACK_CLIENT_ID=${SLACK_CLIENT_ID}
      - SLACK_CLIENT_SECRET=${SLACK_CLIENT_SECRET}
      - SLACK_SIGNING_SECRET=${SLACK_SIGNING_SECRET}
      - SLACK_VERIFICATION_TOKEN=${SLACK_VERIFICATION_TOKEN}
      - SSH_HOST=${SSH_HOST}
      - SSH_USER=${SSH_USER}
      - SSH_PORT=${SSH_PORT}
    networks:
      - netrmes
Error:
root@6d55aa960f46:/hermes-app# ssh root@my_server -p 4222
ssh: connect to host my_server port 4222: Connection refused
So you are trying to ssh from the container to your host system. The container knows nothing about my_server if it is not on the netrmes network. You could use the host network and then:
ssh root@localhost -p 4222
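A quick one-off way to test that idea (a sketch; the alpine image and the key path are assumptions):
docker run --rm -it --network host -v ~/.ssh:/root/.ssh:ro alpine \
  sh -c "apk add --no-cache openssh-client && ssh root@localhost -p 4222"
With --network host the container shares the host's network namespace, so localhost:4222 is the same sshd that ufw allowed. In docker-compose the equivalent is network_mode: host on the service (which makes the ports: mapping irrelevant).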

My docker-compose not working properly when I try to connect mysql with nodejs [duplicate]

This question already has an answer here:
Docker mysql cant connect to container
(1 answer)
Closed 4 years ago.
Here is my db.js file:
let connection = mysql.createConnection({
  host: process.env.DATABASE_HOST || '127.0.0.1',
  user: 'root',
  database: 'bc2k19',
  password: 'joeydash',
  port: 33060
});
Here is my docker-compose.yml file:
version: '3.2'
services:
  app:
    build: ./app
    ports:
      - "3000:3000"
    depends_on:
      - db
    environment:
      - DATABASE_HOST=db
  db:
    build: ./db
    ports:
      - "3306:3306"
Here is my Dockerfile for MySQL:
FROM mysql:latest
ENV MYSQL_ROOT_PASSWORD joeydash
ENV MYSQL_DATABASE bc2k19
ENV MYSQL_USER joeydash
ENV MYSQL_PASSWORD joeydash
ADD setup.sql /docker-entrypoint-initdb.d
Here is the Dockerfile for my app:
# Use Node as the base image.
FROM node:latest
# Add everything in the current directory to our image, in the 'app' folder.
ADD . /app
# Install dependencies
RUN cd /app; \
npm install --production
# Expose our server port.
EXPOSE 3000
# Run our app.
CMD ["node", "/app/bin/www"]
I don't know why every time I try to do docker-compose up and connect it shows
Error: connect ECONNREFUSED 127.0.0.1:3306
app_1 | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1117:14)
It looks like the connection settings are not being picked up, but they are set.
There are 3 things happening here:
Your port number in the Node.js app (db.js) has a typo.
Change 33060 to 3306.
Your user is root, yet the user in the environment is joeydash.
Change root to joeydash.
MySQL might block your connection since it's set to listen on localhost by default. In the container world, localhost almost always points to the container itself.
To fix the last point, check your MySQL config for the bind section (look for bind-address). Make sure it allows connections from the IP your other container is running on.
If MySQL binds to 127.0.0.1, then only software on the same computer will be able to connect (because 127.0.0.1 is always the local computer).
If MySQL binds to 192.168.0.2 (and the server computer's IP address is 192.168.0.2 and it's on a /24 subnet), then any computers on the same subnet (anything that starts with 192.168.0) will be able to connect.
If MySQL binds to 0.0.0.0, then any computer which is able to reach the server computer over the network will be able to connect.
Quoting from https://stackoverflow.com/a/3552946/2724940
If we go through every scenario:
The first one fails since all containers have their own localhost.
The second option could work if the correct IP is set in your MySQL config.
The third option works, as MySQL is now configured to allow all remote connections.
my.cnf (located at /etc/mysql/my.cnf):
[mysqld]
bind-address = 0.0.0.0
You might need to create a network or link containers depending on your version of docker compose.
At the end of docker-compose.yml you could add:
networks:
  backend:
    driver: 'bridge'
And for each container add:
networks:
  - backend
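Once the containers are up, a couple of quick checks (a sketch, assuming the service names app and db from the compose file above):
docker-compose exec app sh -c "getent hosts db"    # should resolve to the db container's IP
docker-compose exec db mysql -ujoeydash -pjoeydash bc2k19 -e "SHOW VARIABLES LIKE 'bind_address';"
If the name resolves and bind_address is 0.0.0.0, remote connections are allowed and the remaining problem is usually the port or the credentials in db.js.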

Cannot access virtual hosts between containers with docker-compose using nginx-proxy and dnsmasq

Context
I was planning on simplifying some development setup of multiple docker-compose.yml by introducing virtual hosts locally. I looked around and decided to use nginx-proxy for the reverse-proxy (ability to set VIRTUAL_HOST for each service).
Setup
To expose these on the host machine I went the route of dnsmasq, adding an /etc/resolver/test file containing nameserver 127.0.0.1.
I went and put the above into action using a dev/docker-compose.yml file:
version: '3.5'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    restart: 'always'
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock:ro"
  dnsmasq:
    image: andyshinn/dnsmasq
    restart: 'always'
    ports:
      - "53:53/tcp"
      - "53:53/udp"
    cap_add:
      - NET_ADMIN
    command: --log-facility=-
    volumes:
      - ./data/dnsmasq.conf:/etc/dnsmasq.conf
      - ./data/dnsmasq.d:/etc/dnsmasq.d
networks:
  default:
    external:
      name: proxynet
The data/dnsmasq.conf file only contains address=/test/127.0.0.1.
I've also created an external network proxynet and use that as the default network for the docker-compose file(s) (docker network create proxynet). This then allows other docker-compose files and services to be linked to the proxy.
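For reference, the external network only needs to be created once, and you can check which containers have joined it:
docker network create proxynet
docker network inspect proxynet --format '{{range .Containers}}{{.Name}} {{end}}'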
I have the following proj1/docker-compose.yml:
version: "3.5"
services:
proj1-web:
image: jwilder/whoami
environment:
- VIRTUAL_HOST=proj1-web.test
networks:
default:
external:
name: proxynet
Having both of these docker-compose files running (i.e., docker-compose up), I am able to access proj1-web.test from my local machine. Everything works as expected.
Now I want to be able to reference proj1-web.test in another container and have it resolve to the running container.
I'll create proj2/docker-compose.yml (similar to the previous one, just a different name):
version: "3.5"
services:
proj2-web:
image: jwilder/whoami
environment:
- VIRTUAL_HOST=proj2-web.test
networks:
default:
external:
name: proxynet
With everything running I can access both proj1-web.test and proj2-web.test from my local machine. I can also successfully curl between proj1 and proj2 using the service names: docker-compose run proj1-web sh -c "apk update -qq; apk add curl -qq; curl -v proj2-web:8000".
Problem
The problem is that I cannot curl the virtual host's name proj2-web.test from proj1: docker-compose run proj1-web sh -c "apk update -qq; apk add curl -qq; curl -v proj2-web.test":
* Rebuilt URL to: proj2-web.test/
* Trying 127.0.0.1...
* TCP_NODELAY set
* connect to 127.0.0.1 port 80 failed: Connection refused
* Failed to connect to proj2-web.test port 80: Connection refused
* Closing connection 0
curl: (7) Failed to connect to proj2-web.test port 80: Connection refused
Is there something I'm missing here? It appears the individual containers don't have access to the DNS that dnsmasq provides to my local machine, and I cannot figure out how to grant them that access. Maybe I'm going about this the wrong way -- I am open to suggestions.
I ended up creating a solution which addresses my question. You can see the repository here for the tool:
https://github.com/scoremedia/dcdc
I also created a blog post detailing a bit of this: https://kevinjalbert.com/docker-compose-dns-consistency-dcdc/
Hopefully this helps others.

Docker Compose: Expose not working

docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
83b1503d2e7c app_nginx "nginx -g 'daemon ..." 2 hours ago Up 2 hours 0.0.0.0:80->80/tcp app_nginx_1
c9dd2231e554 app_web "/home/start.sh" 2 hours ago Up 2 hours 8000/tcp app_web_1
baad0fb1fabf app_gremlin "/start.sh" 2 hours ago Up 2 hours 8182/tcp app_gremlin_1
b663a5f026bc postgres:9.5.1 "docker-entrypoint..." 25 hours ago Up 2 hours 5432/tcp app_db_1
They all work fine:
app_nginx connects well with app_web
app_web connects well with postgres
Not working:
app_web is not able to connect with app_gremlin
docker-compose.yaml
version: '3'
services:
  db:
    image: postgres:9.5.12
  web:
    build: .
    expose:
      - "8000"
    depends_on:
      - gremlin
    command: /home/start.sh
  nginx:
    build: ./nginx
    links:
      - web
    ports:
      - "80:80"
    command: nginx -g 'daemon off;'
  gremlin:
    build: ./gremlin
    expose:
      - "8182"
    command: /start.sh
Errors:
Basically I am not able to connect to the gremlin container from my app_web container.
All commands below have been executed inside the app_web container.
curl:
root@49a8f08a7b82:/# curl 0.0.0.0:8182
curl: (7) Failed to connect to 0.0.0.0 port 8182: Connection refused
netstat
root@49a8f08a7b82:/# netstat -l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 127.0.0.11:42681 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN
udp 0 0 127.0.0.11:54232 0.0.0.0:*
Active UNIX domain sockets (only servers)
Proto RefCnt Flags Type State I-Node Path
nmap
root@49a8f08a7b82:/# nmap -p 8182 0.0.0.0
Starting Nmap 7.60 ( https://nmap.org ) at 2018-06-22 09:28 UTC
Nmap scan report for 0.0.0.0
Host is up.
PORT STATE SERVICE
8182/tcp filtered vmware-fdm
Nmap done: 1 IP address (1 host up) scanned in 2.19 seconds
nslookup
root@88626de0c056:/# nslookup app_gremlin_1
Server: 127.0.0.11
Address: 127.0.0.11#53
Non-authoritative answer:
Name: app_gremlin_1
Address: 172.19.0.3
Experimenting:
For the Gremlin container I did:
ports:
  - "8182:8182"
Then from the host I can connect to the gremlin container, BUT there is still no connection between the web and gremlin containers.
I am working on creating a minimal sample to recreate the issue; meanwhile, does anyone have any idea what the issue might be?
curl 0.0.0.0:8182
The 0.0.0.0 address is a wildcard that tells an app to listen on all network interfaces; you do not connect to this address as a client. For container-to-container communication, you need:
containers on the same user generated network (compose does this for you by default)
connect to the name of the service (or container name)
connect to the port inside the other container, not the published port.
In your case, the command should be:
curl http://gremlin:8182
Networking is namespaced for apps running inside containers, so each container gets its own loopback interface and its own IP address on a bridge network. Moving an app into containers means you need to listen on 0.0.0.0 and connect to the bridge IP using DNS.
You should also remove links and depends_on from your compose file; they don't apply in version 3. Links have long since been deprecated in favor of shared networks. And depends_on doesn't work in swarm mode, and it probably doesn't do what you wanted anyway, since it never checked for the target app to be running, only that the start of that container had been kicked off.
One last note, expose doesn't affect the ability to communicate between containers on common networks or publish ports on the host. Expose simply sets meta data on the image that is documentation between the person creating the image and the person running the image. Applications are not required to use that value, but it's a good habit to make your app default to that value for the benefit of downstream users. Because of its role, unless you have another app checking for the exposed port list, like a self updating reverse proxy, there's no need to expose the port in the compose file unless you're giving the compose file to another person and they need the documentation.
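You can see the difference directly; a small sketch using the container names from the question:
docker inspect --format '{{json .Config.ExposedPorts}}' app_gremlin_1   # image metadata only
docker port app_gremlin_1                                               # empty unless ports: / -p was used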
There is no link configured in the docker-compose.yaml between web and gremlin. Try to use the following:
version: '3'
services:
  db:
    image: postgres:9.5.12
  web:
    links:
      - gremlin
    build: .
    expose:
      - "8000"
    depends_on:
      - gremlin
    command: /home/start.sh
  nginx:
    build: ./nginx
    links:
      - web
    ports:
      - "80:80"
    command: nginx -g 'daemon off;'
  gremlin:
    build: ./gremlin
    expose:
      - "8182"
    command: /start.sh

Local hostnames for Docker containers

Beginner Docker question here,
So I have a development environment in which I'm running a modular app; it works using Docker Compose to run 3 containers: server, client, database.
The docker-compose.yml looks like this:
#############################
# Server
#############################
server:
  container_name: server
  domainname: server.dev
  hostname: server
  build: ./server
  working_dir: /app
  ports:
    - "3000:3000"
  volumes:
    - ./server:/app
  links:
    - database
#############################
# Client
#############################
client:
  container_name: client
  domainname: client.dev
  hostname: client
  image: php:5.6-apache
  ports:
    - "80:80"
  volumes:
    - ./client:/var/www/html
#############################
# Database
#############################
database:
  container_name: database
  domainname: database.dev
  hostname: database
  image: postgres:9.4
  restart: always
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=root
    - POSTGRES_DB=dbdev
    - PG_TRUST_LOCALNET=true
  ports:
    - "5432:5432"
  volumes:
    - ./database/scripts:/docker-entrypoint-initdb.d # init scripts
You can see I'm assigning a .dev domainname to each one. This works fine for seeing one machine from another (Docker internal network); for example, here I'm pinging server.dev from the client.dev CLI:
root@client:/var/www/html# ping server.dev
PING server.dev (127.0.53.53): 56 data bytes
64 bytes from 127.0.53.53: icmp_seq=0 ttl=64 time=0.036 ms
This works great internally, but not on my host OS network.
For convenience, I would like to assign domains on MY local network, not the Docker containers' network, so that I can for example type client.dev in my browser's URL bar and load the Docker container.
Right now, I can only access if I use the Docker IP, which is dynamic:
client: 192.168.99.100:80
server: 192.168.99.100:3000
database: 192.168.99.100:5432
Is there an automated/convenient way to do this that doesn't involve me manually adding the IP to my /etc/hosts file?
BTW I'm on OSX if that has any relevance.
Thanks!
Edit: I found this Github issue which seems to be related: https://github.com/docker/docker/issues/2335
As far as I understood, they seem to say that this is not available out of the box, and they suggest external tools like:
https://github.com/jpetazzo/pipework
https://github.com/bnfinet/docker-dns
https://github.com/gliderlabs/resolvable
Is that correct? And if so, which one should I go for in my particular scenario?
OK, so since it seems that there is no native way to do this with Docker, I finally opted for this alternate solution from Ryan Armstrong, which consists of dynamically updating the /etc/hosts file.
I chose this since it was convenient for me: it works as a script, and I already had a startup script, so I could just append this function to it.
The following example creates a hosts entry named docker.local which
will resolve to your docker-machine IP:
update-docker-host(){
  # clear existing docker.local entry from /etc/hosts
  sudo sed -i '' '/[[:space:]]docker\.local$/d' /etc/hosts
  # get ip of running machine
  export DOCKER_IP="$(echo ${DOCKER_HOST} | grep -oE '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')"
  # update /etc/hosts with docker machine ip
  [[ -n $DOCKER_IP ]] && sudo /bin/bash -c "echo \"${DOCKER_IP} docker.local\" >> /etc/hosts"
}
update-docker-host
This will automatically add or update the docker.local line in /etc/hosts on my host OS when I start the Docker machine through my startup script.
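A quick sanity check after the function has run:
grep 'docker\.local' /etc/hosts
ping -c 1 docker.local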
Anyway, as I found out during my research, apart from editing the hosts file, you could also solve this problem by setting up a custom DNS server.
I also found several projects on GitHub which apparently aim to solve this problem, although I didn't try them:
https://github.com/jpetazzo/pipework
https://github.com/bnfinet/docker-dns
https://github.com/gliderlabs/resolvable
Extending @eduwass's own answer, here's what I did manually (without a script).
As mentioned in the question, define the domainname: myapp.dev and hostname: www in the docker-compose.yml file
Bring up your Docker containers as normal
Run docker-compose exec client cat /etc/hosts to get an output of the container's hosts file (where client is your service name)
(Output example: 172.18.0.6 www.myapp.dev)
Open your local (host machine) /etc/hosts file and add that line: 172.18.0.6 www.myapp.dev
If your Docker service container changes IPs or does anything fancy you will want a more complex solution, but this is working for my simple needs at the moment.
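If you prefer not to read the hosts file from inside the container, the same IP can be pulled straight from Docker (a sketch, assuming the container_name client from the question):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' client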
Another solution would be to use a browser with a proxy extension sending the requests through a proxy container that will know where to resolve the domains to. If you consider using jwilder/nginx-proxy for production mode, then your issue can be easily solved with mitm-nginx-proxy-companion.
Here is an example based on your original stack:
version: '3.3'
services:
  server:
    build: ./server
    working_dir: /app
    volumes:
      - ./server:/app
  client:
    environment:
      - VIRTUAL_HOST=client.dev
    image: php:5.6-apache
    volumes:
      - ./client:/var/www/html
  database:
    image: postgres:9.4
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=root
      - POSTGRES_DB=dbdev
      - PG_TRUST_LOCALNET=true
    volumes:
      - ./database/scripts:/docker-entrypoint-initdb.d # init scripts
  nginx-proxy:
    image: jwilder/nginx-proxy
    labels:
      - "mitmproxy.proxyVirtualHosts=true"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  nginx-proxy-mitm:
    dns:
      - 127.0.0.1
    image: artemkloko/mitm-nginx-proxy-companion
    ports:
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
Run docker-compose up
Add a proxy extension to your browser, with proxy address being 127.0.0.1:8080
Access http://client.dev
The request will follow the route:
Access a local development domain in a browser
The proxy extension forwards that request to mitm-nginx-proxy-companion instead of the “real” internet
mitm-nginx-proxy-companion tries to resolve the domain name through the dns server in the same container
If the domain is not a “local” one, it will forward the request to the “real” internet
But if the domain is a “local” one, it will forward the request to the nginx-proxy
The nginx-proxy in its turn forwards the request to the appropriate container that includes the service we want to access
Side notes:
links removed as it's outdated and is replaced by Docker networks
you don't need to add domain names to server and database containers. client will be able to access them on server and database domains because they are all in the same network (similar to what link was doing previously)
you don't need to use ports on the server and database containers, because publishing ports only makes them reachable through the host (127.0.0.1). PHP in the client container only makes "back-end" requests to other containers, and because those containers are on the same network you can already reach them with database:5432 and server:3000. The same goes for server <-> database connections.
I am the author of mitm-nginx-proxy-companion
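To confirm that side note about container-to-container access, a quick check from inside the client container (a sketch; curl may need to be installed in the php image first):
docker-compose exec client getent hosts server database
docker-compose exec client bash -c "curl -sI http://server:3000 | head -n 1"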
In order to make a whole domain point to localhost you can use dnsmasq. In this case, if you choose the domain .dev, any subdomain will point to your container. But you have to know about the problems with the .dev zone.
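A minimal dnsmasq sketch for that (Homebrew on macOS assumed; paths differ on Linux):
echo 'address=/dev/127.0.0.1' | sudo tee -a /usr/local/etc/dnsmasq.conf
sudo brew services restart dnsmasq
sudo mkdir -p /etc/resolver
echo 'nameserver 127.0.0.1' | sudo tee /etc/resolver/dev
Since .dev is now a real TLD with HSTS preloaded in browsers, .localhost or .test is a safer choice today.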
Or you can use a bash script to launch your docker-compose which, on start, will add a line to /etc/hosts, and after you kill the process this line will be removed:
#!/usr/bin/env bash
sudo sed -i '1s;^;127.0.0.1 example.dev\n;' /etc/hosts
trap 'sudo sed -i "/example.dev/d" /etc/hosts' 2
docker-compose up
My Bash script WITH ALIAS without docker-machine
Based on http://cavaliercoder.com/blog/update-etc-hosts-for-docker-machine.html
#!/bin/bash
# alias
declare -A aliasArr
aliasArr[docker_name]="alias1,alias2"
# clear existing *.docker.local entries from /etc/hosts
sudo sed -i '/\.docker\.local$/d' /etc/hosts
# iterate over each machine
docker ps -a --format "{{.Names}}" \
  | while read -r MACHINE; do
    MACHINE_IP="$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ${MACHINE} 2>/dev/null)"
    if [[ ${aliasArr[$MACHINE]} ]]
    then
      DOMAIN_NAME=$(echo ${aliasArr[$MACHINE]} | tr "," "\n")
    else
      DOMAIN_NAME=( ${MACHINE} )
    fi
    for addr in $DOMAIN_NAME
    do
      echo "add ${MACHINE_IP} ${addr}.docker.local"
      [[ -n $MACHINE_IP ]] && sudo /bin/bash -c "echo \"${MACHINE_IP} ${addr}.docker.local\" >> /etc/hosts"
      export no_proxy=$no_proxy,$MACHINE_IP
    done
  done
