How to fix 502 Bad Gateway error in nginx? - docker

I have a server running docker-compose with 2 services: nginx (as a reverse proxy) and back (an API that handles 2 requests). In addition, there is a database that is not located on the server but hosted separately (database as a service).
Requests handled by back:
get('/api') - the back service simply replies "API is running";
get('/db') - the back service sends a simple query to the external database ('SELECT random() as random, current_database() as db').
Request 1 works fine; on request 2 the back service crashes, while nginx keeps running and a 502 Bad Gateway error appears in the console.
The nginx service logs show: upstream prematurely closed connection while reading response header from upstream.
The back service logs show: connection terminated due to connection timeout.
Both errors are rather vague, and I don't know how else to approach them, given that the same code works as it should outside a container, without nginx, and against the same database.
What I have tried:
increasing the number of cores and RAM (now 2 cores and 4 GB of RAM);
adding/removing/changing the proxy_read_timeout, proxy_send_timeout and proxy_connect_timeout parameters;
testing the www.test.com/db request via Postman and curl (it fails with the same error);
running the code on my local machine without a container or Compose, connecting to the same database with the same pool and the same IP (everything is OK, both requests work and return what they should);
changing the worker_processes parameter (tested with values 1 and auto);
adding/removing the proxy_set_header Host $http_host directive, and replacing $http_host with "www.test.com".
Question:
What else can I try to fix the error and make the db request work?
My nginx.conf:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    upstream back-stream {
        server back:8080;
    }

    server {
        listen 80;
        listen [::]:80;
        server_name test.com www.test.com;

        location / {
            root /usr/share/nginx/html;
            resolver 121.0.0.11;
            proxy_pass http://back-stream;
        }
    }
}
My docker-compose.yml:
version: '3.9'
services:
  nginx-proxy:
    image: nginx:stable-alpine
    container_name: nginx-proxy
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    networks:
      - network
  back:
    image: "mycustomimage"
    container_name: back
    restart: unless-stopped
    ports:
      - '81:8080'
    networks:
      - network
networks:
  network:
    driver: bridge
I can upload other files if needed. But given that the code misbehaves only in the container, the problem is most likely in the container setup.
I will be grateful for any help.
Code of the back: here

The reason for the error was this: I forgot to add my server's IP to the list of allowed addresses in the database cluster.
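A quick way to confirm this from inside the container (a sketch; it assumes the image ships a shell with the nc utility and that the external database is Postgres on its default port 5432 - adjust the host and port to your setup):

# run from the compose project directory; db.example.com is a placeholder
docker-compose exec back sh -c 'nc -zv db.example.com 5432'

With an IP allowlist blocking the server, the connect typically hangs until it times out, which matches the "connection timeout" in the back service logs.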

Related

How to use service name as URL in Docker?

What I want to do:
My docker-compose file contains several services, each with a service name. All services use the exact same image. I want them to be reachable via their service name (like curl service-one, curl service-two) from the host. The use case is that these are microservices that should be reachable from a host system.
services:
  service-one:
    container_name: container-service-one
    image: customimage1
  service-two:
    container_name: container-service-two
    image: customimage1
What's the problem
Lots of tutorials say that this is the way to build microservices, but it's usually done with ports, and I need service names instead of ports.
What I've tried
There are lots of very old answers (5-6 years), but not a single one gives a working solution. There are ideas like parsing the IP of each container and then using that, just using hostnames internally between Docker containers, or complex third-party tools like building your own DNS.
It feels weird that I'm the only one who needs several APIs reachable from a host system; this feels like a standard use case, so I think I'm missing something here.
Can somebody tell me where to go from here?
I'll start from basic to advanced, as far as I know.
For starters, every service in the Compose file is part of the same Docker network by default, so services reaching each other by their service name already works "for free".
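For example (hypothetical service names from the question; assumes curl exists in the image):

# from the host, run curl inside service-one against service-two by name
docker-compose exec service-one curl http://service-two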
If you want to use the service names from the host itself, you can set up a reverse proxy like nginx and route by server name (in your case equal to the service name) to the appropriate port on the host running the Docker containers.
The basic idea is to intercept all communication to port 80 on the server and route it according to the incoming DNS name.
Here's an example:
compose file:
version: "3.9"
services:
nginx-router:
image: "nginx:latest"
volumes:
- type: bind
source: ./nginx.conf
target: /nginx/nginx.conf
ports:
- "80:80"
service1:
image: "nginx:latest"
ports:
- "8080:80"
service2:
image: "httpd:latest"
ports:
- "8081:80"
nginx.conf
worker_processes auto;
pid /tmp/nginx.pid;

events {
    worker_connections 8000;
    multi_accept on;
}

http {
    server {
        listen 80;
        server_name 127.0.0.1;

        location / {
            proxy_set_header Host $host;
            proxy_pass http://service1:80;
        }
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_set_header Host $host;
            proxy_pass http://service2:80;
        }
    }
}
In this example, when the server name is localhost, requests are routed to service2, the httpd (Apache HTTP Server) image, and we get the default Apache HTML page.
When accessing through the 127.0.0.1 server name, we get the nginx default page instead.
In your case you'd use the service names as the server names, after setting each one up as a DNS record, and use that record to route to the appropriate service.
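For example, to make service-one and service-two from the question resolvable on the host, the "DNS record" can be as simple as /etc/hosts entries pointing at the Docker host, plus one server block per name (a sketch; the names and upstream ports are illustrative):

# /etc/hosts on the host machine
127.0.0.1 service-one
127.0.0.1 service-two

# nginx.conf of the routing container: one server block per service name
server {
    listen 80;
    server_name service-one;
    location / {
        proxy_set_header Host $host;
        proxy_pass http://service-one:80;
    }
}
server {
    listen 80;
    server_name service-two;
    location / {
        proxy_set_header Host $host;
        proxy_pass http://service-two:80;
    }
}

After this, curl http://service-one and curl http://service-two from the host reach the right containers through the proxy on port 80.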

Hosted server returning localhost/web instead

I have a website hosted on a DigitalOcean server that is behaving weirdly.
The website is written in Flask, deployed with Docker, and served on the web through a reverse proxy combined with Let's Encrypt.
The website's domain is mes.th3pl4gu3.com.
If I go to mes.th3pl4gu3.com/web/ the website appears and works normally.
If I go to mes.th3pl4gu3.com/web it gives me http://localhost/web/ in the URL instead and the connection fails.
However, when I run it locally, it works fine.
I've checked my nginx logs: when I browse mes.th3pl4gu3.com/web/ the access log records a success, but when I use mes.th3pl4gu3.com/web nothing shows up in the log.
Does anyone have any idea what might be causing this?
Below is some code that might help with troubleshooting.
server {
    server_name mes.th3pl4gu3.com;

    location / {
        access_log /var/log/nginx/mes/access_mes.log;
        error_log /var/log/nginx/mes/error_mes.log;
        proxy_pass http://localhost:9003; # The mes_pool nginx vip
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/..........
    ssl_certificate_key /etc/letsencrypt/........
    include /etc/letsencrypt/.........
    ssl_dhparam /etc/letsencrypt/......
}

server {
    if ($host = mes.th3pl4gu3.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name mes.th3pl4gu3.com;
    return 404; # managed by Certbot
}
Docker Instances:
7121492ad994  docker.pkg.github.com/mervin16/mauritius-emergency-services-api/mes:1.2.5  "uwsgi app.ini"           4 weeks ago  Up 4 weeks  0.0.0.0:9002->5000/tcp  mes-instace-2
f4dc063e33b8  docker.pkg.github.com/mervin16/mauritius-emergency-services-api/mes:1.2.5  "uwsgi app.ini"           4 weeks ago  Up 4 weeks  0.0.0.0:9001->5000/tcp  mes-instace-1
fb269ed2229a  nginx                                                                      "/docker-entrypoint.…"  4 weeks ago  Up 4 weeks  0.0.0.0:9003->80/tcp    nginx_mes
2ad5afe0afd1  docker.pkg.github.com/mervin16/mauritius-emergency-services-api/mes:1.2.5  "uwsgi app.ini"           4 weeks ago  Up 4 weeks  0.0.0.0:9000->5000/tcp  mes-backup
docker-compose-instance.yml
version: "3.8"
# Contains all Production instances
# Should always stay up
# In case both instances fails, backup instance will takeover
services:
mes-instace-1:
container_name: mes-instace-1
image: "docker.pkg.github.com/mervin16/mauritius-emergency-services-api/mes:${MES_VERSION}"
networks:
- mes_net
volumes:
- ./data:/app/data
env_file:
- secret.env
ports:
- "9001:5000"
restart: always
environment:
- MES_VERSION=${MES_VERSION}
mes-instace-2:
container_name: mes-instace-2
image: "docker.pkg.github.com/mervin16/mauritius-emergency-services-api/mes:${MES_VERSION}"
networks:
- mes_net
volumes:
- ./data:/app/data
env_file:
- secret.env
ports:
- "9002:5000"
restart: always
environment:
- MES_VERSION=${MES_VERSION}
networks:
mes_net:
name: mes_network
driver: bridge
docker-compose.yml
version: "3.8"
# Contains the backup instance and the nginx server
# This should ALWAYS stay up
services:
mes-backup:
container_name: mes-backup
image: "docker.pkg.github.com/mervin16/mauritius-emergency-services-api/mes:${MES_VERSION}"
networks:
- mes_net
volumes:
- ./data:/app/data
env_file:
- secret.env
ports:
- "9000:5000"
restart: always
environment:
- MES_VERSION=${MES_VERSION}
nginx_mes:
image: nginx
container_name: nginx_mes
ports:
- "9003:80"
networks:
- mes_net
volumes:
- ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
- ./log/nginx:/var/log/nginx
depends_on:
- mes-backup
restart: always
networks:
mes_net:
name: mes_network
driver: bridge
I have multiple instances for load balancing across apps.
Can someone please help, or does anyone have any clue why this might be happening?
As far as I can tell, the page https://mes.th3pl4gu3.com/web works fine with or without the trailing / (Firefox 87 on Ubuntu).
Maybe there is a bug or problem with your web browser, or some kind of VPN/proxy you are running. Make sure all of them are off.
Plus, on nginx you can get rid of the trailing / using a rewrite rule, e.g.:

location = /stream {
    rewrite ^/stream /stream/;
}

which tells nginx to treat /stream as if it were /stream/.
And to make sure you are not facing an issue caused by cached data, disable and clear all caches. In your web browser, hit F12, go to the Console tab, hit F1, and disable the cache there. On nginx, set "no cache" headers, e.g.:

add_header Last-Modified $date_gmt;
add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
if_modified_since off;
expires off;
etag off;
I tested your site with Chrome, Safari, and curl, and I can't reproduce the issue.
Try clearing your cache.
Method 1: Ctrl-Shift-R
Method 2: DevTools -> Application/Storage -> Clear site data
My guess is it is related to your Flask SERVER_NAME.
As you said, locally your SERVER_NAME might be set to localhost:8000.
However, in production it would need to be something like
SERVER_NAME = "th3pl4gu3.com"
Your issue is that the redirect URL is being built from the Flask SERVER_NAME variable, so it ends up as https://localhost/web instead of your desired URL.
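If that is indeed the cause, the production setting would look something along these lines (a sketch; the file name is hypothetical - put it wherever your app loads its config from):

# Flask production config (hypothetical config.py)
SERVER_NAME = "mes.th3pl4gu3.com"   # public hostname instead of localhost
PREFERRED_URL_SCHEME = "https"      # so generated redirects keep https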

nginx load balancer - Docker compose

I have a simple Flask app running on port 5000 inside the container, and I'm trying to add nginx load balancing to scale the app (3 instances).
Here is my docker-compose file:
version: "3.7"
services:
chat-server:
image: chat-server
build:
context: .
dockerfile: Dockerfile
volumes:
- './chat_history:/src/app/chat_history'
networks:
- "chat_net"
ngnix-server:
image: nginx:1.13
ports:
- "8080:80"
volumes:
- './ngnix.conf:/etc/ngnix/nginx.conf'
networks:
- "chat_net"
depends_on:
- chat-server
networks:
chat_net:
And here is my nginx.conf file:

events { worker_connections 1024; }

http {
    upstream app {
        server chat-server_1:5000;
        server chat-server_2:5000;
        server chat-server_3:5000;
    }
}

server {
    listen 80;
    location / {
        proxy_pass http://app;
    }
}
Both services are on the same chat_net network, but when I hit localhost:8080 in my browser I'm getting the nginx default page. Why is that? What am I missing?
You have a typo and are not mounting your nginx.conf file correctly.
You spelled it ngnix in a couple of places in your volumes section, so the container runs with the default config (hence the default home page).
Once you fix that, you will probably hit the error mentioned by @Federkun (nginx won't be able to resolve the 3 domain names you're proxying).
You also have your server directive in the wrong place (it needs to be within the http section).
This should be the modified version of your file:
events { worker_connections 1024; }

http {
    upstream app {
        server chat-server:5000;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app;
        }
    }
}
Notice this is better than making nginx aware of the replica count: you can run docker-compose up with --scale chat-server=N and resize at any time by re-running with a different N, without downtime.
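For example (an illustrative invocation):

docker-compose up -d --scale chat-server=3

Docker's embedded DNS then resolves the chat-server name to all replicas. One caveat: nginx resolves upstream names when it loads its config, so after scaling you may need to reload it (e.g. docker-compose exec ngnix-server nginx -s reload) to pick up new containers.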
@Jedi
Shouldn't it at least be the below?
events { worker_connections 1024; }

http {
    upstream app {
        least_conn;
        server chat-server_1:5000;
        server chat-server_2:5000;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app;
        }
    }
}

nginx reverse-proxy simple config not redirecting

To get the hang of nginx with Docker, I have a very simple nginx.conf file plus docker-compose, running 2 containers for 1 service (the service itself + db).
What I want:
localhost --> show static page
localhost/pics --> show another static page
localhost/wekan --> redirect to my container, which is running on port 3001.
The last part (redirecting to the Docker container) does not work. The app can be reached at localhost:3001, though.
My nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    server {
        listen 80;

        location / {
            root /home/user/serverTest/up1; # index.html is here
        }

        location /wekan {
            proxy_pass http://localhost:3001;
            rewrite ^/wekan(.*)$ $1 break; # this didn't help either
        }

        location /pics {
            proxy_pass http://localhost/example.jpg;
        }

        location ~ \.(gif|jpg|png)$ {
            root /home/user/serverTest/data/images;
        }
    }
}
docker-compose.yml:
version: '2'
services:
  wekandb:
    image: mongo:3.2.21
    container_name: wekan-db
    restart: always
    command: mongod --smallfiles --oplogSize 128
    networks:
      - wekan-tier
    expose:
      - 27017
    volumes:
      - /home/user/wekan/wekan-db:/data/db
      - /home/user/wekan/wekan-db-dump:/dump
  wekan:
    image: quay.io/wekan/wekan
    container_name: wekan-app
    restart: always
    networks:
      - wekan-tier
    ports:
      # Docker outsideport:insideport
      - 127.0.0.1:3001:8080
    environment:
      - MONGO_URL=mongodb://wekandb:27017/wekan
      - ROOT_URL=http://localhost
Looking at the nginx-error logs, I get this:
2018/12/17 11:57:16 [error] 9060#9060: *124 open() "/home/user/serverTest/up1/31fb090e9e6464a4d62d3588afc742d2e11dc1f6.js" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /31fb090e9e6464a4d62d3588afc742d2e11dc1f6.js?meteor_js_resource=true HTTP/1.1", host: "localhost", referrer: "http://localhost/wekan"
So I guess this makes sense: in my understanding, nginx is resolving these follow-up requests against the root given in location /, but clearly that is not where the container is serving from.
How do I prevent that?
Your nginx cannot access the local network interface of your Docker composition.
Try binding wekan's port like this:

wekan:
  ports:
    - 127.0.0.1:3001:8080

Mind the 127.0.0.1.
See https://docs.docker.com/compose/compose-file/#ports
The problem was within the docker-compose configuration.
For anyone wondering: all you need is proxy_pass addr:port or addr:port/, where the second variant does the same as the rewrite rule, so the rewrite can be skipped.
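In nginx terms, the difference between the two looks roughly like this (an illustrative sketch, not my exact config):

location /wekan {
    # no URI part: the full /wekan... path is forwarded to the upstream
    proxy_pass http://localhost:3001;
}

location /wekan/ {
    # with a URI part ("/"): nginx replaces the matched /wekan/ prefix,
    # which makes a separate rewrite rule unnecessary
    proxy_pass http://localhost:3001/;
}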
Apart from that, I had to add /wekan to the ROOT_URL in my docker-compose.
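Presumably that change looks like this in the compose file above (a sketch):

wekan:
  environment:
    - MONGO_URL=mongodb://wekandb:27017/wekan
    - ROOT_URL=http://localhost/wekan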

Nginx proxy for multibranch environment

I am using nginx as a simple proxy service for my multiple Dockerized containers (including an image with an nginx layer as well). I am trying to create a vhost for each branch, and this is causing a lot of trouble. What I want to achieve is:
An nginx proxy service that routes to containers by hostname:
[branch_name].a.xyz.com (frontend container)
some-jenkins.xyz.com (another container)
other containers that don't exist yet
nginx.conf inside proxy container:
upstream frontend-branch {
    server frontend:80;
}

server {
    listen 80;
    server_name ~^(?<branch>.*)\.a\.xyz\.com;

    location / {
        proxy_pass http://frontend-branch;
    }
}
nginx.conf inside frontend container:
server {
    listen 80;

    location / {
        root /www/html/branches/some_default_branch;
    }
}

server {
    listen 80;

    location ~ ^/(?<branch>.*)$ {
        root /www/html/branches/$branch;
    }
}
docker-compose for proxy:
version: "2.0"
services:
proxy:
build: .
ports:
- "80:80"
restart: always
networks:
default:
external:
name: nginx-proxy
Inside the frontend project it looks pretty much the same, except for the service name and of course the ports (81:80).
Is there any way to "pass" the branch as a path to the frontend container (e.g. something like frontend:80/$branch)?
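Conceptually I mean something like this in the proxy config (just to illustrate the idea, I don't know if this is valid):

location / {
    # $branch is the named capture from server_name above
    proxy_pass http://frontend-branch/$branch$request_uri;
}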
Is it even possible to create that kind of proxy? I don't want to use the same nginx-based image both as the proxy and as the "frontend keeper", because in the future I will want to use the proxy for more than one container, so having the whole site's proxy configuration inside the frontend project would be weird.
Cheers
