nginx reverse proxy in docker

I'm having a trivial problem with nginx. For a start, I'm just running nginx and Portainer as containers. Portainer is running on port 9000 and the containers are on the same Docker network, so it's not a visibility issue. Nginx exposes port 80 and works fine, and so does Portainer when accessing port 9000 directly. I'm mapping the nginx volumes /etc/nginx/nginx.conf:ro and /usr/share/nginx/html:ro locally and they react to changes, so I should be hooked up correctly. In my mapped nginx.conf (http section) I have
server {
    location /portainer {
        proxy_pass http://portainer:9000;
    }
}
where portainer is named, well, portainer. I've also tried an upstream block with a server directive, but that didn't work either.
When accessing localhost/portainer, the nginx log shows
2018/04/30 09:21:32 [error] 7#7: *1 open() "/usr/share/nginx/html/portainer" failed (2: No such file or directory), client: 172.18.0.1, server: localhost, request: "GET /portainer HTTP/1.1", host: "localhost"
which would indicate that the location directive is not even being hit(?). I've tried adding / in various places, but to no avail. I'm guessing it's something trivial I'm missing.
Thanks in advance,
Nik

I had to add a trailing slash to both lines:
server {
    location /portainer/ {
        proxy_pass http://portainer:9000/;
    }
}
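For anyone wondering why the trailing slashes matter: when proxy_pass carries a URI part (here the trailing /), nginx replaces the matched location prefix with that URI before forwarding. A rough sketch of the effect (the /api/status path is just a made-up example):
location /portainer/ {
    # With a URI part ("/") on proxy_pass, the matched prefix "/portainer/"
    # is swapped for "/" before the request is forwarded, e.g.
    #   GET /portainer/api/status  ->  http://portainer:9000/api/status
    proxy_pass http://portainer:9000/;
}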

Try this instead:
location ~* ^/portainer/(.*)$ {
    proxy_pass http://portainer:9000/$1$is_args$args;
}
Ref: http://nginx.org/en/docs/http/ngx_http_core_module.html
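A note on the regex variant above: $1 holds only the captured path (the regex is matched against the URI without the query string), so $is_args$args is what carries any query string across. As a rough, annotated sketch:
location ~* ^/portainer/(.*)$ {
    # $1 is the path captured after /portainer/; $is_args$args re-appends the
    # query string, e.g. (made-up request)
    #   GET /portainer/api/endpoints?limit=5 -> http://portainer:9000/api/endpoints?limit=5
    proxy_pass http://portainer:9000/$1$is_args$args;
}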

Related

How to config nginx so that it listens to one port and proxies requests to other port?

I have two Docker containers running on a local Ubuntu machine.
The first one is a Node.js service that listens on port 3010; the second one is an nginx server on port 2010.
I need to handle all requests that come to port 2010 (matching '/login') and pass them to the first container.
I have nginx.conf as below:
server {
    listen 2010;
    server_name 127.0.0.1;
    root /usr/share/nginx/html;

    location ^~ /login {
        proxy_pass http://127.0.0.1:3010$request_uri;
    }
}
I try to make a request from Postman and get an error:
[error] 29#29: *1 connect() failed (111: Connection refused) while connecting to
upstream, client: 172.17.0.1, server: 127.0.0.1, request: "GET /login HTTP/1.1",
upstream: "http://127.0.0.1:3010/login", host: "127.0.0.1:2010"
Where am I wrong, and what am I not doing properly?
127.0.0.1 refers to the nginx server/container itself, not to any external services/containers.
It's doubtful you're running Node.js processes within the nginx container, so you need to refer to the other container by its service name - https://docs.docker.com/network/bridge/
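As a minimal sketch (the service name node-app is an assumption, not from the question; use whatever name the Node.js service has in docker-compose.yml):
server {
    listen 2010;

    location ^~ /login {
        # "node-app" is a placeholder: it must match the Node.js container's
        # service name on a shared Docker network so Docker's DNS can resolve it.
        # With no URI part on proxy_pass, /login is forwarded unchanged.
        proxy_pass http://node-app:3010;
    }
}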

NGINX and Spring Boot in Docker container: 502 Bad Gateway

I deployed my Spring Boot project in a Docker container exposing port 8080, as well as an nginx server exposing port 80.
When I use
curl http://localhost:8080/heya/index
it returns normally
But when I use
curl http://localhost/heya/index
hoping I can reach it through the nginx proxy, it fails. I checked the log, and it says
24#24: *11 connect() failed (111: Connection refused) while connecting to upstream, client: 172.17.0.1, server: , request: "GET /heya/index HTTP/1.1", upstream: "http://127.0.0.1:8080/heya/index", host: "localhost"
Here is my nginx.conf
[nginx.conf was attached as a screenshot; the proxy_pass line it contained is quoted in the answer below]
I cannot figure it out and need help.
I finally got the answer!!
I ran the nginx container and the webapp container in host network mode, and it worked.
(111: Connection refused) while connecting to upstream
is saying Nginx can't connect to the upstream server.
Your
proxy_pass http://heya;
is telling Nginx that the upstream is speaking the HTTP protocol [on the default port 80] on the hostname heya. Unless you're running multiple containers on the same Compose network with one of them named heya, it's unlikely that hostname would resolve.
If the Java application is running on port 8080 inside the same container, talking the HTTP protocol, the correct proxy_pass would be
proxy_pass http://localhost:8080;
(since localhost in the container's view is the container itself).
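A minimal sketch of that case (the /heya/ prefix is assumed from the curl examples above, and this only works when nginx and the app share a network namespace):
server {
    listen 80;

    location /heya/ {
        # Works when nginx and the Spring Boot app share a network namespace
        # (same container, or both started with --network host). In separate
        # containers on a shared bridge network, use the other container's
        # service name instead, e.g. http://app:8080.
        proxy_pass http://localhost:8080;
    }
}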

Nginx + Docker - "no live upstreams while connecting to upstream" with upstream but works fine with proxy_pass

So I've been facing a weird problem, and I'm not sure where the fault is. I'm running a container using docker-compose, and the following nginx configuration works great:
server {
    location / {
        proxy_pass http://container_name1:1337;
    }
}
Where container_name1 is the name of the service I gave in the docker-compose.yml file. It resolves to the IP perfectly and it works. However, the moment I change the above file to this:
upstream backend {
    least_conn;
    server container_name1:1337;
    server container_name2:1337;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
It stops working completely and in error logs I get the following:
2020/03/17 13:16:03 [error] 8#8: *11 no live upstreams while connecting to upstream, client: xxxxxx, server: codedamn.com, request: "GET / HTTP/1.1", upstream: "http://backend/", host: "xxxxx"
Why is that? Is nginx not able to resolve DNS when inside upstream blocks? Could anyone help with this problem?
NOTE: This happens only on production (Ubuntu 16.04), on local (macOS Catalina), the same configuration works fine. I'm totally confused after discovering this.
Update 1: The following works:
upstream backend {
    least_conn;
    server container_name1:1337;
}
But not with more than one server. Why?!
Alright, figured it out. This happens because docker-compose starts containers in a random order, and nginx quickly marks the app containers as down (I was deploying this in production while there was some traffic). The app containers weren't ready, but nginx was, so it marked them as down and stopped forwarding any traffic to them.
For now, instead of trying to sync up docker-compose's container creation order (which turned out to be a bit hacky), I disabled nginx's behavior of automatically marking a service as down after failed attempts by writing:
server app_inst1:1337 max_fails=0;
which lets nginx keep forwarding traffic to that service (and my Docker setup restarts the container in case it crashes), which is fine.
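For completeness, a sketch of the full upstream block with that change applied to both instances (app_inst1 is from the answer above; app_inst2 is assumed by analogy with the earlier container_name2):
upstream backend {
    least_conn;
    # max_fails=0 stops nginx from marking an instance as unavailable after
    # failed connection attempts, so traffic keeps being retried against it.
    server app_inst1:1337 max_fails=0;
    server app_inst2:1337 max_fails=0;
}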

docker consul service discovery

I am working on an SOA system and I am using Consul service discovery with nginx and Registrator. Everything is dockerized. The idea is to have all these backend services running inside Docker containers be visible to the Consul server, and to use nginx as a load balancer to route requests to the correct service.
I've set up Consul and Registrator successfully and tested them using the Consul UI. If I spin up a service running inside Docker (Redis, for example), I can see Consul discovers the service. The problem I am having is configuring nginx to connect to the upstream servers. I have a bunch of PHP services running inside a container, and I want nginx to connect to the correct upstream server and serve the response. However, nginx always returns a 502.
Here is my nginx.conf file:
upstream app-cluster {
    least_conn;
    {{range service "app-http"}}server {{.Address}}:{{.Port}} max_fails=3 fail_timeout=60 weight=1;
    {{else}}server 127.0.0.1:65535; # force a 502{{end}}
}

server {
    listen 80 default_server;

    location / {
        proxy_pass http://app-cluster;
        proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
        proxy_set_header Host \$host;
        proxy_set_header X-Real-IP \$remote_addr;
    }
}
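For reference, once consul-template renders the block above with, say, two healthy app-http instances registered, the generated upstream would look roughly like this (addresses and ports are made up for illustration):
upstream app-cluster {
    least_conn;
    # one server line per registered app-http instance
    server 10.0.0.5:32768 max_fails=3 fail_timeout=60 weight=1;
    server 10.0.0.6:32769 max_fails=3 fail_timeout=60 weight=1;
}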
nginx error log:
2018/08/29 09:56:29 [error] 27#27: *7 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.10.24, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:32795/", host: "aci-host-01:8080"
Does anyone know of a comprehensive guide on this, or have an idea where the problem might be?
Thanks in advance

NGINX reverse proxy to docker applications

I am currently learning to set up nginx but I am already having an issue. GitLab and Nextcloud are running on my VPS, and both are accessible on the right port. I therefore created an nginx config with a simple proxy_pass directive, but I always receive 502 Bad Gateway.
Nextcloud, GitLab and NGINX are Docker containers, and NGINX has port 80 open. The remaining two containers have ports 3000 and 3100 open.
/etc/nginx/conf.d/gitlab.domain.com.conf
upstream gitlab {
    server x.x.x.x:3000;
}

server {
    listen 80;
    server_name gitlab.domain.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://gitlab/;
    }
}
/var/logs/error.log
2018/04/12 08:10:41 [error] 7#7: *1 connect() failed (113: Host is unreachable) while connecting to upstream, client: xx.201.226.19, server: gitlab.domain.com, request: "GET / HTTP/1.1", upstream: "http://xxx.249.7.15:3000/", host: "gitlab.domain.com"
2018/04/12 08:10:42 [error] 7#7: *1 connect() failed (113: Host is unreachable) while connecting to upstream, client: xx.201.226.19, server: gitlab.domain.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://xxx.249.7.15:3000/favicon.ico", host: "gitlab.domain.com", referrer: "http://gitlab.domain.com/"
What is wrong with my configuration?
I think you could get away with a config way simpler than that.
Maybe something like this:
http {
    ...
    server {
        listen 80;
        charset utf-8;
        ...
        location / {
            proxy_pass http://gitlab:3000;
        }
    }
}
I assume you are using Docker's internal DNS for accessing the containers, i.e. gitlab points to the gitlab container's internal IP. If that is the case, you can open up a container and try to ping the gitlab container from the other container.
For example, you can ping the gitlab container from the nginx container like this:
$ docker ps (use this to get the container id)
Now do:
$ docker exec -it <container_id_for_nginx_container> bash
# apt-get update -y
# apt-get install iputils-ping -y
# ping -c 2 gitlab
If you can't ping it, then it means the containers have trouble communicating with each other. Are you using docker-compose? If you are, then I would suggest looking at the "links" keyword, which is used to link containers that should be able to communicate with each other. So, for example, you would probably link the gitlab container to postgresql; a compose sketch follows below.
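With a recent docker-compose, a shared user-defined network does the same job as links. A minimal sketch (service names, image tags and the network name are just for illustration, not from the question):
version: "3"
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
    networks:
      - web
  gitlab:
    image: gitlab/gitlab-ce
    networks:
      - web
networks:
  web:
# Both services share the "web" network, so nginx can reach GitLab by name,
# e.g. proxy_pass http://gitlab:<port GitLab listens on inside its container>;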
Let me know if this helps.
Another option, which takes advantage of the fact that your Docker containers are just processes in their own isolated control groups, is to bind each process (container) to a port on the host network (instead of an isolated network). This bypasses Docker routing, so beware of the caveat that ports may not overlap on the host machine (no different from any normal process sharing the same host network).
You mentioned running Nginx and Nextcloud (I assume you are using the nextcloud fpm image because of FastCGI support). In this case, I had to do the following on my Arch Linux machine:
/usr/share/webapps/nextcloud is bind mounted into the container at /var/www/html.
The UID of both the host and container process must be the same (in my case, the host user http and the container user www-data are both UID 33).
The 443 server block in nginx.conf must set root to the host's Nextcloud path: root /usr/share/webapps/nextcloud;.
The FastCGI script path for each server block that calls php-fpm over FastCGI must be adjusted to refer to the Docker container's Nextcloud base path: fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;. In other words, you cannot use $document_root as you normally would, because that points to the host's Nextcloud root path (a sketch of such a server block follows this list).
Optional: Adjust the paths to the database and Redis in config.php to not use localhost, but rather the hostname of the host machine. localhost seems to reference the container itself despite it having been bound to the host machine's main network.
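Putting the root and SCRIPT_FILENAME points above together, a minimal sketch of the relevant 443 server block (the certificate paths, the PHP location regex and the php-fpm address are assumptions for illustration):
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/cloud.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/certs/cloud.key;

    # Host path: nginx itself reads static files from the bind-mount source.
    root /usr/share/webapps/nextcloud;
    index index.php;

    location ~ \.php(?:$|/) {
        include fastcgi_params;
        # Container path: php-fpm inside the nextcloud container sees the app
        # at /var/www/html, so $document_root cannot be used here.
        fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;   # assumed: fpm container bound to host port 9000
    }
}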
