Docker Consul service discovery

I am working on an SOA system and I am using Consul service discovery with nginx and Registrator. Everything is dockerized. The idea is to have all the backend services running inside Docker containers be visible to the Consul server, and to use nginx as a load balancer that routes requests to the correct service.
I've set up Consul and Registrator successfully and tested them using the Consul UI. If I spin up a service running inside Docker (Redis, for example), Consul discovers it. The problem I am having is configuring nginx to connect to the upstream servers. I have a bunch of PHP services running inside containers, and I want nginx to connect to the correct upstream server and serve the response; however, nginx always returns a 502.
Here is my nginx.conf file (a consul-template template):
upstream app-cluster {
    least_conn;
    {{range service "app-http"}}server {{.Address}}:{{.Port}} max_fails=3 fail_timeout=60 weight=1;
    {{else}}server 127.0.0.1:65535; # force a 502{{end}}
}
server {
    listen 80 default_server;

    location / {
        proxy_pass http://app-cluster;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
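In case it helps, the upstream block above uses consul-template syntax; the template is rendered with something along these lines (the paths and the Consul address are placeholders, not my exact setup):

consul-template \
  -consul-addr "consul:8500" \
  -template "/etc/consul-templates/app.conf.ctmpl:/etc/nginx/conf.d/app.conf:nginx -s reload"

consul-template watches Consul, re-renders the upstream block whenever the app-http service changes, and runs nginx -s reload afterwards.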
nginx error log:
2018/08/29 09:56:29 [error] 27#27: *7 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.10.24, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:32795/", host: "aci-host-01:8080"
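One thing I notice: the upstream in that log line is 127.0.0.1:32795, i.e. Consul handed nginx the loopback address for the service, which nginx (running in its own container) can never reach. In case Registrator is registering services under 127.0.0.1, here is a sketch of starting it with an explicit host IP (the IP is a placeholder for the Docker host's routable address, and it assumes Consul's HTTP port is reachable on the host):

docker run -d --name registrator \
  --net host \
  -v /var/run/docker.sock:/tmp/docker.sock \
  gliderlabs/registrator:latest \
  -ip 192.168.10.10 \
  consul://localhost:8500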
Does anyone know of a comprehensive guide on this, or have an idea where the problem might be?
Thanks in advance.

Related

How to configure nginx so that it listens on one port and proxies requests to another port?

I have two Docker containers running on a local Ubuntu machine.
The first is a Node.js service listening on port 3010; the second is an nginx server on port 2010.
I need to handle all requests that come to port 2010 (matching '/login') and pass them to the first container.
I have nginx.conf as below:
server {
    listen 2010;
    server_name 127.0.0.1;
    root /usr/share/nginx/html;

    location ^~ /login {
        proxy_pass http://127.0.0.1:3010$request_uri;
    }
}
I try to make a request from Postman and get an error:
[error] 29#29: *1 connect() failed (111: Connection refused) while connecting to
upstream, client: 172.17.0.1, server: 127.0.0.1, request: "GET /login HTTP/1.1",
upstream: "http://127.0.0.1:3010/login", host: "127.0.0.1:2010"
Where am I going wrong, and what am I doing improperly?
127.0.0.1 refers to the nginx server/container itself, not to any external services/containers.
It's doubtful you're running Node.js processes within the nginx container, so you need to refer to the other container by its service name - https://docs.docker.com/network/bridge/
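For example, assuming the Node.js container is named node-app and both containers are attached to the same user-defined bridge network (both names here are made up):

docker network create webnet
docker run -d --name node-app --network webnet my-node-image
docker run -d --name nginx --network webnet -p 2010:2010 nginx

Then the location block can target the container by name; without a URI part, proxy_pass forwards the original request URI, so $request_uri is no longer needed:

location ^~ /login {
    proxy_pass http://node-app:3010;
}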

NGINX and Spring Boot in Docker containers: 502 Bad Gateway

I deployed my Spring Boot project in a Docker container with port 8080 open, as well as an nginx server with port 80 open.
When I use
curl http://localhost:8080/heya/index
it returns normally.
But when I use
curl http://localhost/heya/index
hoping to reach it through the nginx proxy, but it failed. I checked the log, and it says
*24#24: 11 connect() failed (111: Connection refused) while connecting to upstream, client: 172.17.0.1, server: , request: "GET /heya/index HTTP/1.1", upstream: "http://127.0.0.1:8080/heya/index", host: "localhost"
Here is my nginx.conf (it was posted as an image; the relevant line is proxy_pass http://heya;).
I cannot figure it out and need help.
I finally got the answer!
I ran the nginx container and the webapp container in host network mode, and it worked.
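For reference, host network mode looks roughly like this (image names are assumptions, and host networking behaves this way on Linux only):

docker run -d --network host --name app my-springboot-image
docker run -d --network host --name nginx nginx

With --network host both containers share the host's network stack, so nginx really can reach the app at http://127.0.0.1:8080, and -p port mappings are neither needed nor honored.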
(111: Connection refused) while connecting to upstream
is saying that Nginx can't connect to the upstream server.
Your
proxy_pass http://heya;
is telling Nginx that the upstream is talking the HTTP protocol [on the default port 80] on the hostname heya. Unless you're running multiple containers in the same Compose network, it's unlikely that the hostname would be heya.
If the Java application is running on port 8080 inside the same container, talking the HTTP protocol, the correct proxy_pass would be
proxy_pass http://localhost:8080;
(since localhost in the container's view is the container itself).
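If you do want proxy_pass http://heya; to resolve, the usual way is to run both containers on the same Compose network, where service names double as DNS names; a minimal sketch (image names are assumptions):

version: "3"
services:
  heya:
    image: my-springboot-image   # serves HTTP on 8080 inside the container
  nginx:
    image: nginx
    ports:
      - "80:80"

Even then, the nginx config would need proxy_pass http://heya:8080; since the Spring Boot app listens on 8080, not on the default port 80.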

Nginx Reverse Proxy to Docker 502 Bad Gateway

Spent all week on this one and tried every related stackoverflow post. Thanks for being here.
I have an Ubuntu VM running nginx with reverse proxies pointing to various Docker containers concurrently running on different ports. All my static sites work flawlessly. However, I have one container running an Express.js app.
After restarting the server, I get responses for about an hour. Then I get 502 Bad Gateway. A refresh brings the site back up for approximately 5 seconds until it goes down permanently. This is reproducible.
The docker container has express listening on 0.0.0.0:8090 inside the container
The container is running
02e1917991e6 docker/express-site "docker-entrypoint.s…" About an hour ago Up About an hour 127.0.0.1:8090->8090/tcp express-site
The 8090 port is EXPOSEd in the Dockerfile.
I tried other ports.
When it's down, I can still curl the site from within the container.
When it's down, curling the site from within the VM yields
curl: (52) Empty reply from server
Memory and CPU usage within the container and within the VM barely reach 5%.
Site usually has SSL but tried http as well.
Tried various nginx proxy settings (see config below)
Using out-of-the box nginx.conf
Considering that it might be related to a timeout or docker network settings...
My sites-available config file looks like:
server {
    server_name example.com www.example.com;

    location / {
        proxy_pass http://127.0.0.1:8090;
        #proxy_set_header Host $host;
        #proxy_buffering off;
        #proxy_buffer_size 16k;
        #proxy_busy_buffers_size 24k;
        #proxy_buffers 64 4k;
    }

    listen 80;
    listen [::]:80;
    #listen 443 ssl; # managed by Certbot
    #ssl_certificate /etc/letsencrypt/live/www.example.com/fullchain.pem; # managed by Certbot
    #ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem; # managed by Certbot
    #include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    #ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
Nginx Error Log shows:
2021/01/02 23:50:00 [error] 13901#13901: *46 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: ***.**.**.***, server: example.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:8090/favicon.ico", host: "www.example.com", referrer: "http://www.example.com"
Anyone else have ideas?
I didn't get much feedback, but I did more research, and the site is now stable, so I wanted to post my findings.
I isolated the issue to the Docker container; nginx works fine with the same app running directly on the VM.
I updated my Docker image from node:12-alpine to node:14-alpine. The site has been up for 42 hours without issue.
If it randomly fails again, it's probably due to load.
I hope this solves someone's issue.
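For reference, the fix was only a base-image bump in the Dockerfile; a minimal sketch (everything except the FROM line is an assumption about the app):

FROM node:14-alpine
# was: FROM node:12-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 8090
CMD ["node", "server.js"]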
Update 2021-10-24
The same issue started again, and I've narrowed it down to the port and/or Docker on my version of Ubuntu. May I recommend (a port-change sketch follows this list):
changing the port
rebooting your PC
installing the latest OS and docker updates
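For the port change, a sketch of what that might look like (the new host port 8091 is an arbitrary choice):

docker run -d --name express-site -p 127.0.0.1:8091:8090 docker/express-site

...with the nginx side updated to match:

proxy_pass http://127.0.0.1:8091;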

nginx reverse proxy in docker

I'm having a trivial problem with nginx. As a starter, I'm just running nginx and Portainer as containers. Portainer is running on port 9000, and the containers are on the same Docker network, so it's not a visibility issue. Nginx exposes port 80 and works fine, and so does Portainer when accessing port 9000 directly. I'm mapping the nginx volumes /etc/nginx/nginx.conf:ro and /usr/share/nginx/html:ro locally and they react to changes, so I should be hooked up correctly. In my mapped nginx.conf (http section) I have
server {
    location /portainer {
        proxy_pass http://portainer:9000;
    }
}
where portainer is named, well, portainer. I've also tried an upstream directive with a server entry, but that didn't work either.
When accessing localhost/portainer, the nginx log shows
2018/04/30 09:21:32 [error] 7#7: *1 open() "/usr/share/nginx/html/portainer" failed (2: No such file or directory), client: 172.18.0.1, server: localhost, request: "GET /portainer HTTP/1.1", host: "localhost"
which would indicate that the location directive is not even being hit(?). I've tried putting / in various places, but to no avail. I'm guessing it's something trivial I'm missing.
Thanks in advance,
Nik
I had to add a trailing slash to both lines (with a URI part in proxy_pass, nginx replaces the matched /portainer/ prefix with /, so Portainer sees requests at its root):
server {
    location /portainer/ {
        proxy_pass http://portainer:9000/;
    }
}
Try this instead:
location ~* ^/portainer/(.*)$ {
    proxy_pass http://portainer:9000/$1$is_args$args;
}
Ref: http://nginx.org/en/docs/http/ngx_http_core_module.html

NGINX reverse proxy to docker applications

I am currently learning to set up nginx, but I am already having an issue. GitLab and Nextcloud are running on my VPS, and both are accessible on their respective ports. I therefore created an nginx config with a simple proxy_pass command, but I always receive 502 Bad Gateway.
Nextcloud, GitLab, and nginx are Docker containers, and nginx has port 80 open. The other two containers have ports 3000 and 3100 open.
/etc/nginx/conf.d/gitlab.domain.com.conf
upstream gitlab {
    server x.x.x.x:3000;
}

server {
    listen 80;
    server_name gitlab.domain.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://gitlab/;
    }
}
/var/logs/error.log
2018/04/12 08:10:41 [error] 7#7: *1 connect() failed (113: Host is unreachable) while connecting to upstream, client: xx.201.226.19, server: gitlab.domain.com, request: "GET / HTTP/1.1", upstream: "http://xxx.249.7.15:3000/", host: "gitlab.domain.com"
2018/04/12 08:10:42 [error] 7#7: *1 connect() failed (113: Host is unreachable) while connecting to upstream, client: xx.201.226.19, server: gitlab.domain.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://xxx.249.7.15:3000/favicon.ico", host: "gitlab.domain.com", referrer: "http://gitlab.domain.com/
What is wrong with my configuration?
I think you could get away with a config way simpler than that.
Maybe something like this:
http {
    ...
    server {
        listen 80;
        charset utf-8;
        ...
        location / {
            proxy_pass http://gitlab:3000;
        }
    }
}
I assume you are using Docker's internal DNS for accessing the containers, i.e. gitlab resolves to the gitlab container's internal IP. If that is the case, you can open up a container and try to ping the gitlab container from the other container.
For example you can ping the gitlab container from the nginx container like this:
$ docker ps (use this to get the container id)
Now do:
$ docker exec -it <container_id_for_nginx_container> bash
# apt-get update -y
# apt-get install iputils-ping -y
# ping -c 2 gitlab
If you can't ping it, the containers are having trouble communicating with each other. Are you using docker-compose? If so, I would suggest looking at the "links" keyword, which is used to link containers that should be able to communicate with each other. For example, you would probably link the gitlab container to postgresql.
Let me know if this helps.
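If the containers turn out not to see each other, a minimal docker-compose sketch that puts nginx and gitlab on one network (images and the internal port are assumptions):

version: "3"
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
  gitlab:
    image: gitlab/gitlab-ce

On a shared Compose network the service name gitlab resolves via Docker's built-in DNS, so nginx can use proxy_pass http://gitlab:3000; directly; modern Compose networks resolve service names automatically, even without "links".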
Another option, which takes advantage of the fact that your Docker containers are just processes in their own isolated control groups, is to bind each process (container) to a port on the host network (instead of an isolated network group). This bypasses Docker routing, so beware of the caveat that ports may not overlap on the host machine (no different from any normal process sharing the same host network).
You mentioned running nginx and Nextcloud (I assume you are using the Nextcloud fpm image because of FastCGI support). In this case, I had to do the following on my Arch Linux machine:
/usr/share/webapps/nextcloud is bind-mounted into the container at /var/www/html.
The UID of the host and container processes must match (in my case, the host user http and the container user www-data are both UID 33).
The 443 server block in nginx.conf must set root to the host's Nextcloud path: root /usr/share/webapps/nextcloud;
The FastCGI script path for each server block that calls php-fpm over FastCGI must be adjusted to refer to the Docker container's Nextcloud base path: fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name; In other words, you cannot use $document_root as you normally would, because it points to the host's Nextcloud root path (see the sketch after this list).
Optional: adjust the database and Redis hosts in config.php to use the host machine's hostname rather than localhost; localhost seems to refer to the container itself even when the container is bound to the host machine's main network.
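To illustrate the SCRIPT_FILENAME point, here is a sketch of the PHP location block (the php-fpm address and port are assumptions about how the fpm container is published):

location ~ \.php(?:$|/) {
    include fastcgi_params;
    fastcgi_split_path_info ^(.+?\.php)(/.*)$;
    # Use the container's path, not $document_root (which would be the host path)
    fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    fastcgi_pass 127.0.0.1:9000;
}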
