Docker services only available to the local host

Simple use case: a production server running Nginx (listening on 0.0.0.0:80 and 443), responsible for handling the SSL certificates and proxying requests to a "hidden" service in a Docker container. This service runs Gunicorn (serving a Django website) and listens on port 8000. That sounds standard, simple, tested, almost too beautiful to be true... but of course, it's not working as I'd like.
Because Gunicorn, running in its little Docker container, is reachable from the Internet. If you go to my hostname on port 8000, you get the Gunicorn-served website directly. Obviously that's ugly, but the worst part is that it completely bypasses Nginx and the SSL certificates. So why would a Docker container be accessible from the Internet? I know that for some time it was the other way around with Docker. We need a proper balance!
On further inspection of the problem: I do have a firewall, and it's configured to be extremely restrictive. It only allows ports 22 (for ssh, waiting to be remapped), 80 and 443, so 8000 should absolutely not be reachable. But ufw uses iptables, and Docker adds its own iptables rules that bypass that configuration whenever a container publishes a port.
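For reference, you can see what Docker adds by listing the chains it creates (the DOCKER chain in the filter and nat tables):
sudo iptables -L DOCKER -n -v        # per-container ACCEPT rules added by Docker
sudo iptables -t nat -L DOCKER -n    # the DNAT rules that actually publish container ports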
I tried a lot of stupid things (that's part of the job). In my docker-compose.yml file, I tried removing the ports entry altogether (of course, if I do that, nginx can't reach my hidden service). I also tried binding to a specific IP, which is apparently allowed:
ports:
  - "127.0.0.1:8000:8000"
This had a weird result: Nginx was no longer able to connect, but Gunicorn was still visible from the Internet. Exactly the opposite of what I want. I tried manually editing the Docker service to add flags (not good) and I tried adding a configuration file at /etc/docker/daemon.json (setting "ip" to "127.0.0.1" again). I'm running out of ideas. If anyone has a pointer... I wouldn't think this is an extremely rare use of Docker, after all.
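For what it's worth, the daemon.json I tried looked roughly like this (the "ip" key being the relevant part):
{
  "ip": "127.0.0.1"
}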
Specifics: I don't run containers with docker run directly. I have a docker-compose.yml file and deploy the services with docker stack, so in a swarm (although I only have one machine at the moment). That could be related, though again, I would think not.
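For the record, the deployment itself is just something like:
docker stack deploy -c docker-compose.yml mysite   # "mysite" is only an example stack name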
System: Debian 9.
Docker version: 18.09.
docker-compose version: 1.22
nginx (/etc/nginx/sites-available/example.com)
upstream gunicorn {
    server localhost:8000;
}

server {
    server_name example.com www.example.com;

    location / {
        proxy_pass http://gunicorn;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate ...;
    ssl_certificate_key ...;
    include ...;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = www.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    return 404; # managed by Certbot
}
docker-compose.yml
version: '3.7'

services:
  gunicorn:
    image: gunicorn:latest
    command: gunicorn mysite.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - ./:/usr/src/app/
    ports:
      - "8000:8000"
    depends_on:
      - db
    networks:
      - webnet

  db:
    image: postgres:10.5-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    networks:
      - webnet

networks:
  webnet:

volumes:
  postgres_data:
Note: the gunicorn image was built beforehand, but there's no trick to it: just a python:3.7-slim image with everything set up for Gunicorn and a Django website under mysite/. It doesn't expose any port (not that I think it makes any difference here).
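Roughly, the Dockerfile behind that image looks like this (a sketch from memory; the requirements file name and project layout are assumptions):
FROM python:3.7-slim

WORKDIR /usr/src/app

# Install the dependencies (the requirements file name is an assumption)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the Django project; mysite/ lives in here
COPY . .

# docker-compose overrides this with its own gunicorn command anyway
CMD ["gunicorn", "mysite.wsgi:application", "--bind", "0.0.0.0:8000"]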

Okay, after some digging, here's what I found: Docker creates iptables rules so that containers can be reached from outside the host. Telling Docker not to touch iptables at all is not a good strategy, because it needs those rules to forward connections from the containers to the outside world. Instead, the documentation recommends adding a rule to the DOCKER-USER chain to restrict external access to published ports. That's what I did. Of course, the command given there (slightly modified to completely forbid external access) doesn't persist across reboots, so I had to create a service just to add this rule (sketched further down). Probably not the best option, so don't hesitate to comment if you have a better choice to offer. Here's the iptables rule I added:
iptables -I DOCKER-USER -i ext_if ! -s 127.0.0.1 -j DROP
This rule is meant to forbid external access to ports published by Docker while still allowing the host to connect to individual containers. On its own, though, it didn't seem to solve anything for me; it isn't exactly an answer to my question. To forbid access from the Internet to my Docker container running on port 8000, I added yet another rule in the same script:
iptables -I DOCKER-USER -p tcp --destination-port 8000 -j DROP
This rule is a bit extreme: it drops all traffic to port 8000 on the Docker network. The host (localhost) is excluded, since an earlier rule should allow local traffic no matter what (if you do your iptables configuration by hand, you will have to add that rule yourself; it's not a given, which is one reason I switched to simpler tools like ufw). If you look at the firewall rules with iptables -L, you will see that the DOCKER-USER chain is evaluated before any of Docker's own rules. You may well find another rule allowing traffic on the same port, but because we drop it in a higher-priority rule, port 8000 is effectively hidden from the outside.
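For completeness, here is the shape of the one-shot service I ended up with to re-apply these rules at boot (a sketch; the unit name is arbitrary and ext_if stands for your external interface):
# /etc/systemd/system/docker-user-rules.service  (the name is just an example)
[Unit]
Description=Restrict external access to Docker-published ports
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
# ext_if is a placeholder for the external network interface
ExecStart=/sbin/iptables -I DOCKER-USER -i ext_if ! -s 127.0.0.1 -j DROP
ExecStart=/sbin/iptables -I DOCKER-USER -p tcp --destination-port 8000 -j DROP

[Install]
WantedBy=multi-user.target
Enable it once with systemctl daemon-reload && systemctl enable --now docker-user-rules.service.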
This solution, while it seems to solve the problem, is not exactly elegant or intuitive. Again, I can't help but wonder whether I'm the first person ever to use Docker to host a "hidden" service while keeping a different service in front of it. I guess the usual approach is to run nginx in a Docker container itself, but that created other issues for me that, frankly, outweighed the advantages.

Related

Hugo theme link refers to container port in Docker/Nginx

I've got a simple static site, generated with Hugo, that I'm building into a Docker container running Nginx. Nginx is listening on port 90. I'm encountering strange behavior where certain links try to open the container port rather than the host port (in the case of localhost, that's 8000). So, for example, this link:
Docs
...when moused over, shows that it will attempt to open localhost:8000/documents, which is correct, but when clicked it instead attempts to open http://localhost:90/documents/. (If I manually change the URL in the browser to http://localhost:8000/documents/, it responds fine.)
What makes this even stranger:
Only certain links, specifically in the header menu, do this.
I've used dozens of Hugo themes, and I've only encountered this issue with one of them: ZDoc. Could it be specific to this theme? That seems strange to me.
What could be causing this? I'm struggling to even know what this phenomenon is called. "Host/container port confusion"?
I'm certain it's not a misconfiguration of Nginx or Docker. I'm exposing port 90 properly in my Dockerfile:
EXPOSE 90
nginx.conf is set to listen on that port:
http {
    include /etc/nginx/mime.types;
    sendfile on;

    server {
        root /usr/share/nginx/html/;
        index index.html;
        server_name localhost;
        listen 90;
    }
}
And I'm starting the Docker container with the host port 8000 forwarding to the port Nginx is listening on:
docker run --name my-simple-site -p 8000:90 -d simple-site
$ docker ps
CONTAINER ID   IMAGE         COMMAND                  CREATED          STATUS          PORTS                  NAMES
de9cd1526034   simple-site   "nginx -g 'daemon of…"   41 minutes ago   Up 41 minutes   0.0.0.0:8000->90/tcp   my-simple-site
Strangely, the fix for this was to change the link to point directly to the file: Docs
I'm unclear why and would love some insight into this. Does Nginx infer a port when pointing to a directory?

docker nginx proxy nginx connect() failed (111: Connection refused) while connecting to upstream

I'm trying to run an nginx container as the main entry point for all of my websites and web services. I managed to run Portainer as a container and I'm able to reach it from the internet. Right now I'm trying to reach a static website hosted by another nginx container, but I'm failing to do so: when I go to the URL, I get
502 Bad Gateway
I've tried adding an upstream section to my main nginx config, but nothing changed (after every config change, I reload the main nginx service inside the container).
On the other hand, adding an upstream is something I'd like to avoid if possible, because spawning multiple different applications would then require adding an upstream for each of them, and that's much more work than I'd expect.
Here is my main nginx's configuration file:
events {
    worker_connections 1024;
}

http {
    server {
        listen 80;

        location /portainer/ {
            proxy_pass http://portainer:9000/;
        }

        location /helicon/ {
            proxy_pass http://helicon:8001/;
        }
    }
}
Here is how I start my main nginx container:
docker run -p 80:80 --name nginx -v /var/nginx/conf:/etc/nginx:ro --net=internal-net -d nginx
Here is my static website's nginx configuration file:
events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name helicon;
        root /var/www/html/helicon;

        error_log /var/log/nginx/localhost.error.log;
        access_log /var/log/nginx/localhost.access.log;
    }
}
Here is the docker-compose file used to create and start that container:
version: '3.5'

services:
  helicon:
    build: .
    image: helicon
    ports:
      - "127.0.0.1:8001:80"
    container_name: helicon
    networks:
      - internal-net

networks:
  internal-net:
    external: true
I'm using the internal-net network to keep all apps on the same network, instead of the deprecated --link option for docker run.
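Since the network is declared as external in the compose file, it was created beforehand with something like:
docker network create internal-net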
When I go to http://my.server.ip.address/helicon I get a 502. Then I check the logs with docker logs nginx and there is this information:
2018/06/24 11:15:28 [error] 848#848: *466 connect() failed (111: Connection refused) while connecting to upstream, client: Y.Y.Y.Y, server: , request: "GET /helicon/ HTTP/1.1", upstream: "http://172.18.0.2:8001/", host: "X.X.X.X"
The helicon container indeed has an IP address of 172.18.0.2.
What am I missing? Maybe my approach should be completely different from using networks?
Kind regards,
Daniel
To anyone coming across this page: here is my little contribution to your understanding of Docker networking.
I would like to illustrate with an example scenario.
We are running several containers with docker-compose, such as the following:
Docker container client
Docker container nginx reverse proxy
Docker container service1
Docker container service2
etc ...
To make sure you are set up correctly, check the following:
All containers are on the same network!
First run "docker network ls" to find the network name for your stack.
Then run "docker network inspect [your_stack_network_name]".
Note that the ports you expose in docker-compose have nothing to do with nginx reverse proxying!
That means any ports you publish in your docker-compose file are available on your actual host machine, i.e. your laptop or PC, via your browser, BUT for proxying purposes you must point nginx at the actual ports your services listen on.
A walkthrough:
Let's say service1 runs inside container 1 on port 3000, and you mapped port 8080 in your docker-compose file like so: "8080:3000". With this configuration, on your local machine you can access the container via your browser on port 8080 (localhost:8080), BUT for the nginx reverse-proxy container, when proxying to service1, port 8080 is not relevant! The dockerized nginx reverse proxy uses Docker DNS to resolve service1 to its IP within the Docker network and looks at it entirely independently of your local host.
To the nginx reverse proxy inside the Docker network, service1 only listens on port 3000!
So make sure to point nginx to the correct port!
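In other words, for the example above the proxy block should look something like this (the location path is just an illustration):
location /service1/ {
    # use the container's internal port (3000), not the published one (8080)
    proxy_pass http://service1:3000/;
}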
Solved. I was working on this for hours, thinking it was an nginx config issue. I modified nginx.conf endlessly but couldn't fix it. I was getting 502 Bad Gateway and the error description was:
failed (111: Connection refused) while connecting to upstream
I was looking in the wrong place. It turns out that the HTTP server in my index.js file was listening on the hostname 'localhost'.
httpServer.listen(PORT, 'localhost', async err => {
This works fine on your development machine, but when running inside a container it must bind to the container's own hostname. My containers are networked, and in my case the container is named 'backend'.
I changed the hostname from 'localhost' to 'backend' and everything works fine.
httpServer.listen(PORT, 'backend', async err => {
I was getting the same error. In my docker-compose.yml the service's port was mapped to a different port (e.g. 1234:8080) and I was using the mapped port number (1234) inside nginx.conf.
However, inside the Docker network the containers do not use their mapped port numbers. To solve this, I changed the proxy_pass statement to use the correct port number (8080).
To make it clear, the working configuration looks like this (check the port number used in nginx.conf!):
docker-compose.yml
version: '3.8'

services:
  ...
  web1:
    ports:
      - 1234:8080
    networks:
      - net1
  ...
  proxy:
    image: nginx:latest
    networks:
      - net1
  ...

networks:
  net1:
    driver: bridge
nginx.conf
...
location /api {
    ...
    proxy_pass http://web1:8080/;
}
I must thank user8458126 for pointing me in the right direction.
For me, I had overwritten my default.conf nginx file but mistyped its destination path in my Dockerfile, which meant nginx wasn't listening on the correct port and instead defaulted to port 80.
Long story short, make sure you're overwriting to the correct path.
What I had:
COPY ./default.conf ./etc/nginx/default.conf
Correct:
COPY ./default.conf ./etc/nginx/conf.d/default.conf
Hope this saves someone a few hours of racking their brain.

Nginx reverse proxy to an app in host

I have an app that is running outside Docker on port 5000. I am trying to run a reverse proxy in nginx via Docker compose but am unable to communicate with the host's port 5000. In my docker-compose.yml file I have:
ports:
  - 80:80
  - 443:443
  - 5000:5000
When I try to run this I get:
ERROR: for nginx Cannot start service nginx: driver failed programming external connectivity on endpoint nginx (374026a0d34c8b6b789dcd82d6aee6c4684b3201258cfbd3fb18623c4101): Error starting userland proxy: listen tcp 0.0.0.0:5000: bind: address already in use
If I comment out - 5000:5000 I get:
[error] 6#6: *1 connect() failed (111: Connection refused) while connecting to upstream
How do I connect to an already running app in the Host from a Docker nginx container?
EDIT:
My nginx.conf file
user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
}

http {
    upstream mysite {
        server 0.0.0.0:5000;
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_pass http://mysite;
        }
    }
}
When I try to curl localhost, the response is 502 Bad Gateway. The app itself responds fine, and curl 127.0.0.1:5000 works from the host.
EDIT 2:
I have also tried the solution found here but I get nginx: [emerg] host not found in upstream "docker". Docker is my host's hostname.
EDIT 3:
My docker-compose.yml
version: '3'

services:
  simple:
    build: ./simple
    container_name: simple
    ports:
      - 80:80
      - 443:443
My Dockerfile:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;", "-c", "/etc/nginx/nginx.conf"]
EDIT:
I am getting the host name via the "hostname" command in Linux.
The problem lies with 0.0.0.0:5000. Since Nginx is running inside Docker, it tries to find this address inside the container and fails, because nothing is running on 0.0.0.0:5000 inside the container.
So, in order to resolve this, you need to give it an address it can reach:
First, run your application at 0.0.0.0:5000 on your host machine, i.e. you should be able to open your application at 0.0.0.0:5000 from your browser.
Then find your host's IP address. Once you have it, you should be able to open your application at ip_address:5000; and since Docker and the host share the same network, this address can be reached from Docker as well.
Now replace 0.0.0.0:5000 in your Nginx conf file with ip_address:5000, and you will be able to serve your application.
172.17.0.1 is the default host IP available to a Docker container running on the host.
Just use 172.17.0.1:5000 in your nginx conf file and you should be able to connect to your application running on the host, outside the container.
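With the nginx.conf from the question, that would mean something like this (assuming the default bridge network, where 172.17.0.1 is the docker0 gateway):
upstream mysite {
    # 172.17.0.1 = the host, as seen from a container on the default bridge
    server 172.17.0.1:5000;
}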
My docker version is 19.03.12 where I tested the same.
I needed to use a different address to reach the host from the container: http://host.docker.internal.
Note: I'm running on a Windows host. Not sure if that matters.

How to assign domain names to containers in Docker?

I am reading a lot these days about how to set up and run a Docker stack. But one of the things I keep missing is how to set things up so that particular containers respond to access through their domain name, and not just their container name via Docker DNS.
What I mean is: say I have a microservice which is accessible externally, for example users.mycompany.com; requests to it should go through to the microservice container handling the users API.
Then when I access customer-list.mycompany.com, it should go through to the microservice container handling the customer lists.
Of course, using Docker DNS I can access them and link them into a Docker network, but that only works for container-to-container access, not Internet-to-container.
Does anybody know how I should do that? Or the best way to set that up.
So, you need to use the concept of port publishing, so that a port from your container is accessible via a port on your host. Using this, you can set up a simple proxy_pass in Nginx that will redirect users.mycompany.com to myhost:1337 (assuming you published your port to 1337).
So, if you want to do this, you'll need to set up your container to publish a certain port using:
docker run -d -p 5000:5000 training/webapp # publish image port 5000 to host port 5000
From your host, you can then curl localhost:5000 to reach the container.
curl -X GET localhost:5000
If you want to set up a domain name in front, you'll need a webserver instance that lets you proxy_pass your hostname to your container.
i.e. in Nginx:
server {
    listen 80;
    server_name users.mycompany.com;

    location / {
        proxy_pass http://localhost:5000;
    }
}
I would advise you to follow this tutorial, and maybe check the docker run reference.
As far as I know, Docker doesn't provide this feature out of the box, but there are several workarounds. In essence, you need to deploy a DNS server on your host that distinguishes the containers and resolves their domain names to their dynamic IPs. So you could give this a try:
Deploy one of the Docker-aware DNS solutions (I suggest SkyDNS v1 / Skydock);
Configure your host to use this DNS (by default SkyDNS makes the containers know each other by name, but the host is not aware of it);
Run your containers with an explicit --hostname (you will probably use the scheme container_name.image_name.dev.skydns.local); see the sketch below.
You can skip step #2 and run your browser inside a container too; it will discover the web application container by hostname.
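For illustration, step #3 would look something like this (the names just follow the scheme above and are examples):
# "webapp1" and "mycompany/webapp" are example names following the
# container_name.image_name.dev.skydns.local scheme
docker run -d --name webapp1 --hostname webapp1.webapp.dev.skydns.local mycompany/webapp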
Here is one solution with nginx and docker-compose:
users.mycompany.com is served by the nginx container on port 8097
customer-list.mycompany.com is served by the nginx container on port 8098
Nginx configuration:
server {
    listen 0.0.0.0:8097;
    root /root/for/users.mycompany.com
    ...
}

server {
    listen 0.0.0.0:8098;
    root /root/for/customer-list.mycompany.com
    ...
}

server {
    listen 0.0.0.0:80;
    server_name users.mycompany.com;

    location / {
        proxy_pass http://0.0.0.0:8097;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}

server {
    listen 0.0.0.0:80;
    server_name customer-list.mycompany.com;

    location / {
        proxy_pass http://0.0.0.0:8098;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
Docker compose configuration:
services:
  nginx:
    container_name: MY_nginx
    build:
      context: .docker/nginx
    ports:
      - '80:80'
    ...

How to dockerize two applications talking to each other via a http server?

TL;DR
How can we set up a docker-compose environment so that we can reach a container under multiple, custom-defined aliases? (Or any alternative that solves our problem in another fashion.)
Existing setup
We have two applications† (nodejs servers), each behind an HTTP reverse proxy (Nginx), that need to talk to each other. On localhost, configuring this is easy:
Add /etc/hosts entries for ServerA and ServerB:
127.0.0.1 server-a.testing
127.0.0.1 server-b.testing
Run ServerA on port e.g. 2001 and ServerB on port 2002
Configure two virtual hosts, reverse proxying to ServerA and ServerB:
server { # Forward all traffic for server-a.testing to localhost:2001
    listen 80;
    server_name server-a.testing;

    location / {
        proxy_pass http://localhost:2001;
    }
}

server { # Forward all traffic for server-b.testing to localhost:2002
    listen 80;
    server_name server-b.testing;

    location / {
        proxy_pass http://localhost:2002;
    }
}
This setup is great for testing: both applications can communicate with each other in a way that is very close to the production environment, e.g. request('https://server-b.testing', fn); and we can test how the HTTP server configuration interacts with our apps (e.g. TLS config, CORS headers, HTTP/2 proxying).
Dockerize all the things!
We now want to move this setup to docker and docker-compose. The docker-compose.yaml that would work in theory is this:
nginx:
  build: nginx
  ports:
    - "80:80"
  links:
    - server-a
    - server-b

server-a:
  build: serverA
  ports:
    - "2001:2001"
  links:
    - nginx:server-b.testing

server-b:
  build: serverB
  ports:
    - "2002:2002"
  links:
    - nginx:server-a.testing
So when ServerA addresses http://server-b.testing, it actually reaches the Nginx, which reverse proxies it to ServerB. Unfortunately, circular dependencies are not possible with links. There are three typical solutions to this problem:
use ambassadors
use nameservers
use the brand new networking (--x-networking).
None of these work for us because, for the virtual hosting to work, we need to be able to address the Nginx container under the names server-a.testing and server-b.testing. What can we do?
(†) Actually it's a little bit more complicated – four components and links – but that shouldn't make any difference to the solution:
testClient (-> Nginx) -> ServerA,
testClient (-> Nginx) -> ServerB,
ServerA (-> Nginx) -> ServerB,
testClient (-> Nginx) -> ServerC,
Try this:
Link your server-a and server-b containers to nginx with --link server-a:server-a --link server-b:server-b
Update the nginx conf file with:
location /sa {
    proxy_pass http://server-a:2001;
}

location /sb {
    proxy_pass http://server-b:2002;
}
When you link two containers, Docker adds a "container_name container_ip" entry to the /etc/hosts file of the linking container. So, in this case, server-a and server-b are resolved to their respective container IPs via the /etc/hosts file.
And you can access them from http://localhost/sa or http://localhost/sb
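Put together, the docker run commands this implies would look roughly like this (the image names are placeholders):
# image names (server-a-image, server-b-image, my-nginx) are placeholders
docker run -d --name server-a server-a-image
docker run -d --name server-b server-b-image
docker run -d -p 80:80 --link server-a:server-a --link server-b:server-b my-nginx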
