How to assign domain names to containers in Docker? - docker

I am reading a lot these days about how to set up and run a Docker stack. But one of the things I keep missing is how to set things up so that particular containers respond to access through their domain name, and not just their container name via Docker DNS.
What I mean is: say I have a microservice which is accessible externally, for example users.mycompany.com; requests to it should go through to the microservice container which handles the users API.
Then when I access customer-list.mycompany.com, it should go through to the microservice container which handles the customer lists.
Of course, using Docker DNS I can access them and link them into a Docker network, but that only works for container-to-container access, not internet-to-container.
Does anybody know how I should do that, or the best way to set it up?

So, you need to use the concept of port publishing, so that a port from your container is accessible via a port on your host. With this in place, you can set up a simple proxy_pass in Nginx that forwards users.mycompany.com to myhost:1337 (assuming that you published your port to 1337).
If you want to do this, you'll need to run your container with a published port, using:
docker run -d -p 5000:5000 training/webapp # publish image port 5000 to host port 5000
You can then, from your host, curl localhost:5000 to access the container:
curl -X GET localhost:5000
If you want to set up a domain name in front, you'll need a webserver instance that allows you to proxy_pass your hostname to your container, e.g. in Nginx:
server {
    listen 80;
    server_name users.mycompany.com;

    location / {
        proxy_pass http://localhost:5000;
    }
}
I would advise you to follow this tutorial, and maybe check the docker run reference.
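If you prefer docker-compose, here is a minimal sketch of the same idea (service names and file paths are illustrative assumptions). Note that when Nginx itself runs as a container on the same compose network, it can proxy to the service name (http://webapp:5000) instead of localhost:
services:
  webapp:
    image: training/webapp          # demo app listening on 5000 inside the container
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"                     # only the proxy needs a published port
    volumes:
      - ./users.conf:/etc/nginx/conf.d/users.conf:ro   # the server block shown above
DNS for users.mycompany.com still has to point at the host's public IP; Docker itself doesn't handle that part.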

As far as I know, Docker doesn't provide this feature out of the box, but there are several workarounds. Essentially, you need to deploy a DNS server on your host that distinguishes the containers and resolves their domain names to their dynamic IPs. So you could try the following:
Deploy one of the Docker-aware DNS solutions (I suggest SkyDNSv1 / SkyDock);
Configure your host to work with this DNS (by default SkyDNS makes the containers know each other by name, but the host is not aware of it);
Run your containers with an explicit --hostname (you will probably use the scheme container_name.image_name.dev.skydns.local)
You can also skip step #2 and run your browser inside a container too; it will then discover the web application container by hostname.
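For step #3, the Docker side of it looks roughly like this (the DNS address and the hostname scheme are illustrative; refer to the SkyDNS/SkyDock documentation for the actual DNS setup):
# point the container at the Docker-aware DNS server and give it an explicit hostname
docker run -d \
  --name users \
  --dns 172.17.0.1 \
  --hostname users.webapp.dev.skydns.local \
  training/webapp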

Here is one solution with nginx and docker-compose:
users.mycompany.com is served by the nginx container on port 8097
customer-list.mycompany.com is served by the nginx container on port 8098
Nginx configuration:
server {
    listen 0.0.0.0:8097;
    root /root/for/users.mycompany.com;
    ...
}
server {
    listen 0.0.0.0:8098;
    root /root/for/customer-list.mycompany.com;
    ...
}
server {
    listen 0.0.0.0:80;
    server_name users.mycompany.com;

    location / {
        proxy_pass http://0.0.0.0:8097;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
server {
    listen 0.0.0.0:80;
    server_name customer-list.mycompany.com;

    location / {
        proxy_pass http://0.0.0.0:8098;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
Docker Compose configuration:
services:
  nginx:
    container_name: MY_nginx
    build:
      context: .docker/nginx
    ports:
      - '80:80'
    ...
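For context, a slightly fuller sketch of what such a compose file might look like (the static-site volume paths are assumptions; adjust them to your layout):
services:
  nginx:
    container_name: MY_nginx
    build:
      context: .docker/nginx                 # Dockerfile that copies the nginx configuration above
    ports:
      - '80:80'                              # only port 80 needs to be reachable from outside
    volumes:
      - ./sites/users:/root/for/users.mycompany.com:ro
      - ./sites/customer-list:/root/for/customer-list.mycompany.com:ro
Both subdomains then need DNS records pointing at the same host; nginx picks the right server block based on the Host header.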

Related

Configuring Nginx with Gunicorn Flask within Docker

I'm having an issue configuring my nginx.conf for the (server) container.
Without nginx: I can properly access the app through gunicorn.
With nginx: I get 502 Bad Gateway
My first question would be, should I even have nginx on the container(server) if I have ingress nginx?
My second question is: why is this configuration not working? Here is my Dockerfile for the (server) container:
Dockerfile
FROM python:3 as builder
WORKDIR '/app'
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
ENTRYPOINT [ "gunicorn", "-b", "0.0.0.0:8000", "run:app" ]
FROM nginx
EXPOSE 80
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
nginx.conf
upstream flask_server {
    server localhost:8000;
}

server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://flask_server;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Command
# build
docker build -t test/server .
# run
docker run -p 80:80 test/server
I suppose my issue is that the upstream localhost is not working. When I'm developing locally, is there no way for me to test this container specifically through Docker? Or do I have to test locally with docker-compose and put nginx in a separate container?
I know this post could be outdated, but it still comes up in Google searches :)
My second question is, why is this configuration not working. Here is my docker file for container (server)
As I can see, you specified proxy_pass as http://flask_server.
That means the hostname flask_server has to be resolvable.
This name could simply be the name of your Docker container, but neither your Dockerfile nor your docker run command sets such a name.
The name could also be a group of servers, but in that case you have to define an upstream directive:
upstream flask_server {
    server <hostname1>;
    server <hostname2>;
}
(c) https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass
If a domain name resolves to several addresses, all of them will be used in a round-robin fashion. In addition, an address can be specified as a server group.
To sum up: to fix the 502 error (which means nginx could not get a valid response from the upstream) you have two options:
change proxy_pass http://flask_server; to proxy_pass http://0.0.0.0:8000;
OR
define an upstream directive according to the manual
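For reference, a minimal sketch of what the fixed configuration could look like when the Flask app runs in a separate container on the same Docker network (the service name flask_app is an assumption; use your actual container/service name):
upstream flask_server {
    server flask_app:8000;    # container name, resolvable via Docker's embedded DNS
}

server {
    listen 80;

    location / {
        proxy_pass http://flask_server;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}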
My first question would be, should I even have nginx on the container(server) if I have ingress nginx?
In my opinion, you should not.
You would deploy Nginx alongside the (server) container only if, in the future, you want to load-balance traffic between multiple gunicorn instances, or want to provide some HTTP authentication or access restrictions.

Docker port forwarding on EC2 by domain name?

I have 2 dockers containers running on my EC2 instance:
Docker1: Wordpress website running with PHP server mapped to port 8081 of EC2 instance.
Docker2: Portal created on Angular running with NGINX mapped to port 8082 of EC2 instance.
I want to use the same EC2 instance for my domain and subdomain xyz.com and portal.xyz.com on the same port 80.
Ideally, if the request comes from xyz.com, it should redirect to Docker1 running on 8081 and if it is from portal.xyz.com, it should be redirected to Docker2 running on 8082.
Is it feasible, and if yes, how? I do not want to spawn 2 EC2 instances for this, and both sites have to be served over HTTP on port 80.
Using multiple load balancers and target groups can solve your problem. https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ecs-services-now-support-multiple-load-balancer-target-groups/
You can set up both load balancers to listen on HTTP and target your one ECS instance on different ports. After that, setting up the routes in Route53 will be straightforward.
I have done something similar on a VPS server; technically it should work on an EC2 instance as well.
I created a new Docker network 'proxy-network'. (Note: you can do without creating a network and just proxy to localhost:8081 and localhost:8082. This is just cleaner.)
Launch all the application containers in that network with proper names (e.g. wordpress, angular). Use --name in the run command or container_name in docker-compose.
Launch a new nginx container mapping host ports 80 and 443 (if you need HTTPS to work). I used the nginx:latest image, created a new default.conf, and replaced /etc/nginx/conf.d/default.conf in the container. The commands below sketch these steps.
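A rough command-line sketch of those steps (the application image names are placeholders; the network name and config path follow the description above):
# 1. create the dedicated network
docker network create proxy-network

# 2. launch the app containers on that network with predictable names
docker run -d --name wordpress --network proxy-network my-wordpress-image
docker run -d --name angular   --network proxy-network my-angular-image

# 3. launch nginx on the same network, publishing 80/443, with the custom config
docker run -d --name proxy --network proxy-network \
  -p 80:80 -p 443:443 \
  -v $(pwd)/default.conf:/etc/nginx/conf.d/default.conf:ro \
  nginx:latest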
The sample proxy.conf should look like this:
server {
    listen 80;
    server_name domain1.com www.domain1.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://wordpress;
    }
}
server {
    listen 80 default_server;
    server_name domain2.com www.domain2.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://angular;
    }
}
Once you update the alias records at your domain registrar, it works like a charm. Hope it helps. Good luck.

Expose docker port to internal network only

I'm running numerous Docker containers right now on a new build for a homelab server and trying to make sure everything is locked down and secure. I use the server for a variety of things, some requiring access from the outside world (Nextcloud) and some that I will only access from my internal network (Plex). Of course the server is behind a router that limits open ports, but I'm looking for additional security: I would like to restrict the containers that I only want to access via the internal network to 192.168.0.0/24. That way, if somehow a port became open on my router, they would not be exposed (am I being too paranoid?).
Currently docker-compose files are exposing ports via:
....
ports:
- 8989:8989
....
This of course works fine, but it is accessible to the world should I open the port on my router. I know I can bind to localhost via
....
ports:
- 127.0.0.1:8989:8989
....
But that doesn't help me when I'm trying to access the container from my internal network. I've read numerous articles regarding Docker networks and various flags, and have also read about a possible iptables solution.
Any guidance is much appreciated.
Thanks,
Simply do not declare any ports in docker-compose; they are automatically visible between containers.
I use an elasticsearch container in this way, and a separate kibana container can connect to it by the service name declared in the yml.
if somehow a port became open on my router, it would not be exposed
Using this procedure the ports are never visible outside the docker environment (i.e. outside == in your local network).
If your concern is that ports are published in your LAN when doing the procedure I told you, they are not.
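A minimal sketch of that pattern (image tags are illustrative): neither service publishes a port, yet kibana can still reach elasticsearch by service name over the default compose network.
services:
  elasticsearch:
    image: elasticsearch:7.17.0
    environment:
      - discovery.type=single-node
    # no "ports:" section, so nothing is reachable from outside Docker
  kibana:
    image: kibana:7.17.0
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200   # resolved via the compose network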
You are actually very close with
ports:
- 127.0.0.1:8989:8989
as with this it is accessible locally on your server. Funnily enough, your bind-to-localhost trick is exactly what I was looking for in my own setup xD
From this point there are actually a couple of ways to set it up so that you can access it on your local network.
SSH Tunneling
The first one is the one I'm using in my own setup: SSH forwarding.
You can, if you haven't already, set up an .ssh/config file to forward localhost ports to your computer. Taking your example into account, the syntax is as follows:
Host some-hostname
    HostName 192.168.x.x
    User user-of-server
    LocalForward 8989 127.0.0.1:8989
some-hostname is a short name you can choose, user-of-server is the actual user you set up to log in with, and 192.168.x.x is the actual local IP address of your server; you can also include an IdentityFile /path/to/ssh/key. With this you can run ssh some-hostname to SSH into your server from any computer on your local network, and your server will be available at localhost:8989 on that specific computer.
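The equivalent one-off command, without touching .ssh/config (same illustrative names as above):
# forward local port 8989 to 127.0.0.1:8989 on the server for the duration of the session
ssh -L 8989:127.0.0.1:8989 user-of-server@192.168.x.x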
Reverse Proxy
The second is a reverse proxy like nginx. This too can be run in a Docker container; you could bind it to any port, say for example 6443, and you can mount its config file into the container with:
    volumes:
      - 'config:/etc/nginx/conf.d'
    ports:
      - 6443:443
volumes:
  config:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: "./config"
Then, in ./config/default.conf, you could set up something like:
server {
    listen 443 ssl http2;
    server_name 192.168.x.x;

    ssl_certificate /etc/letsencrypt/signed_chain.crt;
    ssl_certificate_key /etc/letsencrypt/domain.key;
    include /etc/nginx/includes/ssl.conf;

    location / {
        ### force timeouts if one of the backends has died ##
        ### such died, many backend, very timeouts ##
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;

        ### Set headers ####
        proxy_set_header Accept-Encoding "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host:$server_port;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header Front-End-Https on;

        proxy_buffering off;
        proxy_pass http://127.0.0.1:8989;
    }
}
Then it should be available on, and only on, 192.168.x.x:6443.
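As a quick sanity check from another machine on the LAN (-k skips certificate validation, useful if the certificate is self-signed):
curl -k https://192.168.x.x:6443/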

NGINX reverse proxy to docker applications

I am currently learning to set up nginx, but I am already having an issue. GitLab and Nextcloud are running on my VPS, and both are accessible on the right port. I created an nginx config with a simple proxy_pass directive, but I always receive 502 Bad Gateway.
Nextcloud, GitLab and NGINX are Docker containers; NGINX has port 80 open, and the other two containers have ports 3000 and 3100 open.
/etc/nginx/conf.d/gitlab.domain.com.conf
upstream gitlab {
    server x.x.x.x:3000;
}

server {
    listen 80;
    server_name gitlab.domain.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://gitlab/;
    }
}
/var/logs/error.log
2018/04/12 08:10:41 [error] 7#7: *1 connect() failed (113: Host is unreachable) while connecting to upstream, client: xx.201.226.19, server: gitlab.domain.com, request: "GET / HTTP/1.1", upstream: "http://xxx.249.7.15:3000/", host: "gitlab.domain.com"
2018/04/12 08:10:42 [error] 7#7: *1 connect() failed (113: Host is unreachable) while connecting to upstream, client: xx.201.226.19, server: gitlab.domain.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://xxx.249.7.15:3000/favicon.ico", host: "gitlab.domain.com", referrer: "http://gitlab.domain.com/
What is wrong with my configuration?
I think you could get away with a config way simpler than that.
Maybe something like this:
http {
    ...
    server {
        listen 80;
        charset utf-8;
        ...
        location / {
            proxy_pass http://gitlab:3000;
        }
    }
}
I assume you are using Docker's internal DNS for accessing the containers, for example gitlab points to the gitlab container's internal IP. If that is the case, then you can open up a container and try to ping the gitlab container from the other container.
For example you can ping the gitlab container from the nginx container like this:
$ docker ps (use this to get the container id)
Now do:
$ docker exec -it <container_id_for_nginx_container> bash
# apt-get update -y
# apt-get install iputils-ping -y
# ping -c 2 gitlab
If you can't ping it, then it means the containers have trouble communicating with each other. Are you using docker-compose? If you are, then I would suggest looking at the "links" keyword, which is used to link containers that should be able to communicate with each other. So for example you would probably link the gitlab container to postgresql.
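For illustration, a minimal compose sketch in which the nginx container can reach the gitlab container by name over a shared default network (image tags are assumptions; on current compose versions the shared network alone usually makes links unnecessary):
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    # no published port required; nginx reaches it over the internal network
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    links:
      - gitlab          # legacy option, but guarantees "gitlab" resolves from the nginx container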
Let me know if this helps.
Another option, which takes advantage of the fact that your Docker containers are just processes in their own control groups, is to bind each process (container) to a port on the host network (instead of an isolated network group). This bypasses Docker routing, so beware of the caveat that ports may not overlap on the host machine (no different from any normal process sharing the same host network). A minimal sketch of this approach follows the list below.
You mentioned running Nginx and Nextcloud (I assume you are using the nextcloud fpm image because of FastCGI support). In this case, I had to do the following on my Arch Linux machine:
/usr/share/webapps/nextcloud is bind-mounted into the container at /var/www/html.
The UID of both host and container process must be the same (in my case, user host http and container www-data are UID=33)
The 443 server block in nginx.conf must set root to the host's nextcloud path, root /usr/share/webapps/nextcloud;.
The FastCGI script path for each server block that calls php-fpm over FastCGI must be adjusted to refer to the Docker container's Nextcloud base path, fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;. In other words, you cannot use $document_root as you normally would, because this points to the host's nextcloud root path.
Optional: Adjust the database and Redis settings in the config.php file to use the hostname of the host machine rather than localhost. localhost seems to reference the container itself despite the container having been bound to the host machine's main network.
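A minimal sketch of the host-networking approach described above (the image tag, mount path, and UID mapping are assumptions based on the list; check the nextcloud fpm image documentation for the exact details):
# run the Nextcloud PHP-FPM container directly on the host network,
# bind-mounting the host's nextcloud directory into the container
docker run -d \
  --name nextcloud-fpm \
  --network host \
  --user 33:33 \
  -v /usr/share/webapps/nextcloud:/var/www/html \
  nextcloud:fpm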

How to configure Nginx with gunicorn and bokeh serve

I want to serve a Flask app that uses an embedded bokeh serve from a server on my local network. To illustrate, I made an example using the bokeh serve example and a Docker image to replicate the server. The Docker image runs Nginx and Gunicorn. I think there is a problem with my nginx configuration routing requests to the /bkapp URI.
I have detailed the problem and provided all source code in the following git repo
I have started a discussion on bokeh google group
Single Container
In order to reduce the complexity of running nginx in its own container, I built this image that runs nginx in the same container as the web app.
Installation
NOTE: I am using Docker version 17.09.0-ce
Download or clone the repo and navigate to this directory (single_container).
# as root
docker build -f Dockerfile -t single_container .
Start a terminal session in a new container:
# as root
docker run -ti single_container:latest
In the new container, start nginx:
nginx
Now start gunicorn:
gunicorn -w 1 -b :8000 flask_gunicorn_embed:app
In a separate terminal (on the host machine), find the IP address of the single_container container you are running:
#as root
docker ps
# then copy the CONTAINER ID and inspect it
docker inspect [CONTAINER ID] | grep IPAddress
PROBLEM
Using the IP found above (with the container running), check it out in Firefox with the inspector.
As you can see in the screenshot (see the screenshots folder, "single_container_broken.png", for the raw output), the GET request just hangs.
I can verify that nginx is serving the static files though by navigating to /bkapp/static/ (see bokeh_recipe/single_container/nginx/bokeh_app.conf for config)
Another oddity is that when I try to hit the embedded bokeh server directly (with /bkapp/), I end up with a 400 (denied?).
Note about app
To reduce the complexity of dynamically assigning available ports to Tornado workers, I hard-coded port 46518 for talking to bokeh serve.
nginx config
I know you could just look at bokeh_recipe/single_container/nginx/bokeh_app.conf but I want to show it here.
I think I need to configure nginx to make explicit that the "request" to bkapp at 127.0.0.1:46518 originates FROM the server, not the client.
## Define the parameters for a specific virtual host/server
server {
    # define what port to listen on
    listen 80;
    # Define the specified charset to the "Content-Type" response header field
    charset utf-8;

    # Configure NGINX to deliver static content from the specified folder
    # NOTE this should be a docker volume shared from the bokehrecipe_web container (css, js for bokeh serve)
    location /bkapp/static/ {
        alias /home/flask/app/web/static/;
        autoindex on;
    }

    # Configure NGINX to reverse proxy HTTP requests to the upstream server (Gunicorn (WSGI server))
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host:$server_port;
        proxy_buffering off;
    }

    # deal with the http://127.0.0.1/bkapp/autoload.js (note hard coded port for now)
    location /bkapp/ {
        proxy_pass http://127.0.0.1:46518;
    }
}
