NGINX reverse proxy to Docker applications

I am currently learning to set up nginx, but I am already running into an issue. GitLab and Nextcloud are running on my VPS and both are accessible on the right port. So I created an nginx config with a simple proxy_pass directive, but I always receive 502 Bad Gateway.
Nextcloud, GitLab, and NGINX are Docker containers; NGINX has port 80 open, and the other two containers expose ports 3000 and 3100.
/etc/nginx/conf.d/gitlab.domain.com.conf
upstream gitlab {
    server x.x.x.x:3000;
}

server {
    listen 80;
    server_name gitlab.domain.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://gitlab/;
    }
}
/var/logs/error.log
2018/04/12 08:10:41 [error] 7#7: *1 connect() failed (113: Host is unreachable) while connecting to upstream, client: xx.201.226.19, server: gitlab.domain.com, request: "GET / HTTP/1.1", upstream: "http://xxx.249.7.15:3000/", host: "gitlab.domain.com"
2018/04/12 08:10:42 [error] 7#7: *1 connect() failed (113: Host is unreachable) while connecting to upstream, client: xx.201.226.19, server: gitlab.domain.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://xxx.249.7.15:3000/favicon.ico", host: "gitlab.domain.com", referrer: "http://gitlab.domain.com/"
What is wrong with my configuration?

I think you could get away with a config way simpler than that.
Maybe something like this:
http {
    ...
    server {
        listen 80;
        charset utf-8;
        ...
        location / {
            proxy_pass http://gitlab:3000;
        }
    }
}
I assume you are using Docker's internal DNS to reach the containers, i.e. the name gitlab resolves to the gitlab container's internal IP. If that is the case, you can open a shell in one container and try to ping the gitlab container from it.
For example you can ping the gitlab container from the nginx container like this:
$ docker ps   # use this to get the container id
Now do:
$ docker exec -it <container_id_for_nginx_container> bash
# apt-get update -y
# apt-get install iputils-ping -y
# ping -c 2 gitlab
If you can't ping it, the containers have trouble communicating with each other. Are you using docker-compose? If so, I would suggest looking at the "links" keyword, which is used to link containers that should be able to talk to each other (a shared user-defined network achieves the same thing, as sketched below). So, for example, you would probably link the gitlab container to postgresql.
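For example, a minimal docker-compose sketch (the images and network name here are assumptions, not taken from your setup) where nginx and gitlab join a shared user-defined network and can resolve each other by service name:

version: '3'
services:
  nginx:
    image: nginx:latest
    ports:
      - '80:80'
    networks:
      - web
  gitlab:
    image: gitlab/gitlab-ce:latest
    networks:
      - web
networks:
  web: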
Let me know if this helps.

Another option, which takes advantage of the fact that your Docker containers are just processes in their own control group, is to bind each process (container) to a port on the host network (instead of an isolated network group). This bypasses Docker routing, so beware of the caveat that ports may not overlap on the host machine (no different from any normal processes sharing the same host network).
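As a sketch, running a container directly on the host's network looks like this (the image is just an example); note that -p port mappings are ignored in this mode, because the container shares the host's ports directly:

docker run -d --network host nginx:latest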
You mentioned running Nginx and Nextcloud (I assume you are using the nextcloud fpm image because of FastCGI support). In this case, I had to do the following on my Arch Linux machine:
/usr/share/webapps/nextcloud is bind-mounted into the container at /var/www/html.
The UID of the host and container processes must be the same (in my case, the host user http and the container user www-data are both UID 33).
The 443 server block in nginx.conf must set root to the host's nextcloud path, root /usr/share/webapps/nextcloud;.
The FastCGI script path for each server block that calls php-fpm over FastCGI must be adjusted to refer to the Docker container's Nextcloud base path: fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;. In other words, you cannot use $document_root as you normally would, because that points to the host's nextcloud root path (see the sketch after this list).
Optional: adjust the database and Redis paths in config.php to use the host machine's hostname rather than localhost. localhost seems to refer to the container itself, despite the container being bound to the host machine's main network.
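A minimal sketch of the relevant server block, assuming php-fpm inside the Nextcloud container is reachable at 127.0.0.1:9000 because of the host-network binding (the port and the bare listen line are assumptions):

server {
    listen 443 ssl;
    # host path to the bind-mounted Nextcloud tree
    root /usr/share/webapps/nextcloud;

    location ~ \.php(?:$|/) {
        include fastcgi_params;
        # container path, NOT $document_root (which would resolve to the host path)
        fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
    }
}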

Related

Connection refused when uwsgi and nginx are in different containers

I am trying to set up two Docker containers (yes, separate, without docker-compose): one with nginx and one with uWSGI running a basic Flask app.
I run the containers on the same Docker network.
My nginx site config, added/linked to sites-enabled (everything else is default):
server {
    listen 80;
    server_name 127.0.0.1;

    location / {
        include uwsgi_params;
        uwsgi_pass 0.0.0.0:8080;
    }
}
My uwsgi.ini:
[uwsgi]
module = app:app
master = true
processes = 2
socket = 0.0.0.0:8080
The uwsgi entry point in Docker looks like:
.local/bin/uwsgi --ini uwsgi.ini
The containers run fine on their own: uwsgi receives requests on 8080, and nginx serves requests as expected. However, when I try to access 127.0.0.1, I get a 502 status code and nginx logs this error:
1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.4.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "uwsgi://0.0.0.0:8080", host: "127.0.0.1"
By googling, I found suggestions to either use a single container with a some_socket.sock file, or to use docker-compose. Apparently the problem is with permissions, but I do not know how to diagnose or solve them.
I launch containers with these commands:
docker run --network app_network --name nginx --rm -p 80:80 my_nginx
docker run --network app_network --name flaskapp --rm -p 8080:8080 my_uwsgi
EDIT
You can simply use the hostname of the docker container in the uwsgi_pass directive as both docker containers are on the same subnet.
location / {
    include uwsgi_params;
    uwsgi_pass flaskapp:8080;
}
0.0.0.0 isn't the IP address of the server; it essentially tells the server to listen on every IP address the device has allocated.
To connect to it from nginx, you will need to use the IP address of the container instead.
You can find the IP address of the container running uWsgi with the following command:
docker inspect CONTAINER_ID
Where CONTAINER_ID is the ID of the container you started uwsgi in.
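As a shortcut, you can filter the inspect output down to just the address with a Go template (this prints the IP on each network the container is attached to):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' CONTAINER_ID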
From here you can update the nginx config as follows:
uwsgi_pass IP_ADDRESS:8080;
Where IP_ADDRESS is the one you found from the command above
You can also set the IP address of the container when you start it, with the following option:
--ip <ip>
Be careful, however, to ensure that the IP address you set is in the same subnet as the IPs Docker normally assigns on that network.
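For example (the address is an assumption; note that --ip is only honored on user-defined networks such as app_network above):

docker run --network app_network --ip 192.168.4.10 --name flaskapp --rm -p 8080:8080 my_uwsgi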

docker consul service discovery

I am working on an SOA system and I am using Consul service discovery with nginx and Registrator. Everything is dockerized. The idea is to have all these backend services running inside Docker containers, visible to the Consul server, and to use nginx as a load balancer that routes requests to the correct service.
I've set up Consul and Registrator successfully and tested them using the Consul UI. If I spin up a service inside Docker (Redis, for example), I can see that Consul discovers it. The problem I am having is configuring nginx to connect to the upstream servers. I have a bunch of PHP services running inside a container, and I want nginx to connect to the correct upstream server and serve the response. However, nginx always returns a 502.
Here is my nginx.conf file:
upstream app-cluster {
    least_conn;
    {{range service "app-http"}}server {{.Address}}:{{.Port}} max_fails=3 fail_timeout=60 weight=1;
    {{else}}server 127.0.0.1:65535; # force a 502{{end}}
}

server {
    listen 80 default_server;

    location / {
        proxy_pass http://app-cluster;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
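For reference, this file is a consul-template template; it gets rendered into the actual nginx config with something like the following (the paths and Consul address here are examples, not my exact setup):

consul-template -consul-addr "consul:8500" -template "/etc/nginx/conf.d/app.ctmpl:/etc/nginx/conf.d/app.conf:nginx -s reload"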
nginx error log:
2018/08/29 09:56:29 [error] 27#27: *7 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.10.24, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:32795/", host: "aci-host-01:8080"
Does anyone know of a comprehensive guide on this, or have an idea where the problem might be?
Thanks in advance.

nginx reverse proxy in docker

I'm having a trivial problem with nginx. For starters, I'm just running nginx and Portainer as containers. Portainer is running on port 9000, and the containers are on the same Docker network, so it's not a visibility issue. Nginx exposes port 80 and works fine. So does Portainer when accessing port 9000 directly. I'm mapping the nginx volumes /etc/nginx/nginx.conf:ro and /usr/share/nginx/html:ro locally, and they react to changes, so I should be hooked up correctly. In my mapped nginx.conf (http section) I have
server {
    location /portainer {
        proxy_pass http://portainer:9000;
    }
}
where the Portainer container is named, well, portainer. I've also tried an upstream directive plus server, but that didn't work either.
When accessing localhost/portainer, the nginx log shows
2018/04/30 09:21:32 [error] 7#7: *1 open() "/usr/share/nginx/html/portainer" failed (2: No such file or directory), client: 172.18.0.1, server: localhost, request: "GET /portainer HTTP/1.1", host: "localhost"
which would indicate that the location directive is not even hit(?). I've tried / in various places but to no avail. I'm guessing it's something trivial I'm missing.
Thanks in advance,
Nik
I had to add a trailing slash to both lines. With the trailing slashes, nginx strips the matched /portainer/ prefix and passes the remainder of the URI to the upstream, so Portainer sees requests at its root:
server {
    location /portainer/ {
        proxy_pass http://portainer:9000/;
    }
}
Try this instead, which strips the /portainer prefix via the capture group:
location ~* ^/portainer/(.*)$ {
    proxy_pass http://portainer:9000/$1$is_args$args;
}
Ref: http://nginx.org/en/docs/http/ngx_http_core_module.html

How to configure Nginx with gunicorn and bokeh serve

I want to serve a Flask app that uses an embedded bokeh serve from a server on my local network. To illustrate, I made an example using the bokeh serve example and a Docker image to replicate the server. The Docker image runs Nginx and Gunicorn. I think there is a problem with my nginx configuration routing the requests to the /bkapp URI.
I have detailed the problem and provided all source code in the following git repo.
I have also started a discussion on the bokeh Google group.
Single Container
In order to reduce the complexity of running nginx in its own container, I built this image, which runs nginx in the same container as the web app.
Installation
NOTE: I am using Docker version 17.09.0-ce
Download or clone the repo and navigate to this directory (single_container).
# as root
docker build -f Dockerfile -t single_container .
Start a terminal session in a new container:
# as root
docker run -ti single_container:latest
In the new container, start nginx:
nginx
Now start gunicorn:
gunicorn -w 1 -b :8000 flask_gunicorn_embed:app
In a separate terminal (on the host machine), find the IP address of the single_container container you are running:
# as root
docker ps
# then copy the CONTAINER ID and inspect it
docker inspect [CONTAINER ID] | grep IPAddress
PROBLEM
Using the IP found above (with the container running), check it out in Firefox with the inspector open.
As you can see in the screenshot above (see the screenshots folder, "single_container_broken.png", for the raw image), the GET request just hangs.
I can verify that nginx is serving the static files, though, by navigating to /bkapp/static/ (see bokeh_recipe/single_container/nginx/bokeh_app.conf for the config).
Another oddity is that when I try to hit the embedded bokeh server directly (with /bkapp/), I end up with a 400 (denied?).
Note about app
To reduce the complexity of dynamically assigning available ports to Tornado workers, I hard-coded port 46518 for talking to bokeh serve.
nginx config
I know you could just look at bokeh_recipe/single_container/nginx/bokeh_app.conf, but I want to show it here.
I think I need to configure nginx to make it explicit that the "request" to bkapp at 127.0.0.1:46518 originates FROM the server, not the client.
## Define the parameters for a specific virtual host/server
server {
    # define what port to listen on
    listen 80;

    # define the specified charset for the "Content-Type" response header field
    charset utf-8;

    # configure NGINX to deliver static content from the specified folder
    # NOTE: this should be a docker volume shared from the bokehrecipe_web container (css, js for bokeh serve)
    location /bkapp/static/ {
        alias /home/flask/app/web/static/;
        autoindex on;
    }

    # configure NGINX to reverse proxy HTTP requests to the upstream server (Gunicorn (WSGI server))
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host:$server_port;
        proxy_buffering off;
    }

    # deal with http://127.0.0.1/bkapp/autoload.js (note the hard-coded port for now)
    location /bkapp/ {
        proxy_pass http://127.0.0.1:46518;
    }
}

How to assign domain names to containers in Docker?

I am reading a lot these days about how to set up and run a Docker stack. But one of the things I keep missing is how to set up containers so that they respond to access through their domain name, and not just their container name via Docker DNS.
What I mean is: say I have a microservice which is accessible externally, for example users.mycompany.com; requests to it should go through to the microservice container which handles the users API.
Then, when I access customer-list.mycompany.com, it should go through to the microservice container which handles the customer lists.
Of course, using Docker DNS I can access them and link them into a Docker network, but this only really works for container-to-container access, not internet-to-container.
Does anybody know how I should do that? Or the best way to set that up.
So, you need to use the concept of port publishing, so that a port from your container is accessible via a port on your host. With this, you can set up a simple proxy_pass in Nginx that forwards users.mycompany.com to myhost:1337 (assuming that you published your port to 1337).
So, if you want to do this, you'll need to set up your container to publish a certain port using:
docker run -d -p 5000:5000 training/webapp # publish image port 5000 to host port 5000
You can then, from your host, curl localhost:5000 to access the container.
curl -X GET localhost:5000
If you want to set up a domain name in front, you'll need to have a webserver instance that allows you to proxy_pass your hostname to your container.
e.g., in Nginx:
server {
    listen 80;
    server_name users.mycompany.com;

    location / {
        proxy_pass http://localhost:5000;
    }
}
I would advise you to follow this tutorial, and maybe check the docker run reference.
As far as I know, Docker doesn't provide this feature out of the box. But surely there are several workarounds here. In essence, you need to deploy a DNS server on your host that distinguishes the containers and resolves their domain names to their dynamically assigned IPs. So you could give this a try:
Deploy one of the Docker-aware DNS solutions (I suggest SkyDNSv1 / SkyDock);
Configure your host to work with this DNS (by default SkyDNS makes the containers know each other by name, but the host is not aware of it);
Run your containers with an explicit --hostname (you will probably use the scheme container_name.image_name.dev.skydns.local); see the example below.
You can skip step #2 and run your browser inside a container too; it will discover the web application container by hostname.
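For example (the container and image names are assumptions, following the scheme above):

docker run -d --hostname users.webapp.dev.skydns.local --name users mycompany/users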
Here is one solution with nginx and docker-compose:
users.mycompany.com is served by the nginx container on port 8097
customer-list.mycompany.com is served by the nginx container on port 8098
Nginx configuration:
server {
    listen 0.0.0.0:8097;
    root /root/for/users.mycompany.com;
    ...
}

server {
    listen 0.0.0.0:8098;
    root /root/for/customer-list.mycompany.com;
    ...
}

server {
    listen 0.0.0.0:80;
    server_name users.mycompany.com;

    location / {
        proxy_pass http://0.0.0.0:8097;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}

server {
    listen 0.0.0.0:80;
    server_name customer-list.mycompany.com;

    location / {
        proxy_pass http://0.0.0.0:8098;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
Docker Compose configuration:
services:
  nginx:
    container_name: MY_nginx
    build:
      context: .docker/nginx
    ports:
      - '80:80'
    ...
