I have a Go application that works well locally (I send requests and get the correct responses), but when I run this application in containers behind an nginx proxy server, I get a 502 error for all requests:
[error] 31#31: *1 connect() failed (111: Connection refused) while
connecting to upstream, client: 172.20.0.1, server:
polygon.application.local, request: "GET /v1/credits HTTP/1.1",
upstream: "http://172.20.0.5:8080/v1/credits", host:
"polygon.application.local"
I have tried various solutions I found on Google, but none of them have fixed it yet.
Here are my configs:
docker-compose.yaml
version: "3.9"
services:
nginx:
image: nginx:alpine
volumes:
- ${PROJECT_DIR}/deployments/nginx.conf:/etc/nginx/conf.d/default.conf:delegated
- ${PROJECT_DIR}/configs/ssl:/etc/nginx/ssl/:delegated
ports:
- "80:80"
- "443:443"
swagger_ui:
image: swaggerapi/swagger-ui
environment:
SWAGGER_JSON: /spec/api.swagger.yaml
volumes:
- ${PROJECT_DIR}/api/openapi-spec/api.swagger.yaml:/spec/api.swagger.yaml
credit_server:
build:
context: ..
dockerfile: ${PROJECT_DIR}/deployments/Dockerfile
args:
BUILD_APP_NAME: credit-server
depends_on:
credit_service_db:
condition: service_healthy
credit_service_db:
image: mysql:8.0
container_name: credit_service_db
restart: always
environment:
MYSQL_DATABASE: credit_service
MYSQL_USER: credit_service
MYSQL_PASSWORD: credit_service
MYSQL_ROOT_PASSWORD: credit_service
ports:
- '3306:3306'
expose:
- '3306'
healthcheck:
test: [ 'CMD-SHELL', 'mysqladmin ping -h localhost' ]
interval: 5s
timeout: 20s
retries: 10
nginx.conf
map $microservice $upstream {
credits credit_server:8080;
swagger swagger_ui:8080;
}
server {
listen 443 http2 ssl;
server_name polygon.application.local;
server_tokens off;
client_max_body_size 16m;
root /dev/null;
resolver 127.0.0.11 valid=30s;
ssl_certificate /etc/nginx/ssl/crt.pem;
ssl_certificate_key /etc/nginx/ssl/private.key.pem;
location / {
set $microservice "swagger";
proxy_pass http://$upstream;
}
location ~ ^/v1/(?<microservice>[\w\-]+) {
proxy_http_version 1.1;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto http;
proxy_set_header X-Frame-Options SAMEORIGIN;
if ($request_method = 'OPTIONS') {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET,POST,PUT,DELETE,OPTIONS,PATCH';
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Access-Control-Allow-Headers,Access-Control-Allow-Origin,Authorization';
add_header 'Access-Control-Max-Age' 1728000;
add_header 'Content-Type' 'text/plain; charset=utf-8';
add_header 'Content-Length' 0;
return 204;
}
add_header 'X-Microservice' '$microservice';
add_header 'X-Proxy-Pass' '$upstream';
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET,POST,PUT,DELETE,OPTIONS,PATCH';
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Access-Control-Allow-Headers,Access-Control-Allow-Origin,Authorization';
proxy_pass http://$upstream;
}
}
Dockerfile for the credit_server
FROM alpine:latest
ARG BUILD_APP_NAME
ENV PROJECT_DIR=/go
RUN apk add tzdata
COPY ./build/${BUILD_APP_NAME} ${PROJECT_DIR}/bin/app
COPY ./configs ${PROJECT_DIR}/configs
COPY ./internal/migrations ${PROJECT_DIR}/migrations
CMD ${PROJECT_DIR}/bin/app -c ${PROJECT_DIR}/configs/config.yml -m container -p ${PROJECT_DIR}/migrations/
All containers start and run without errors.
I send my requests from Swagger UI and Postman.
The error you are getting comes from NGINX. It looks like NGINX is finding the correct container to talk to, but nothing is listening on port 8080 in that container. That could be because the listening port is misconfigured, or because the application takes some time to start up and is not yet accepting connections at the moment NGINX tries to connect.
Try adding depends_on declarations to the nginx service:
depends_on:
credit_server:
condition: service_healthy
swagger_ui:
condition: service_healthy
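Note that condition: service_healthy only works if the referenced service actually defines a healthcheck, and neither credit_server nor swagger_ui does in the compose file above. A minimal sketch of what one could look like for credit_server, assuming the alpine-based image has wget and the app serves something on port 8080 (the probe endpoint here is illustrative, not taken from the original setup):

```yaml
credit_server:
  healthcheck:
    # /v1/credits is just an example route; probe whatever cheap
    # endpoint your app actually serves on 8080.
    test: ["CMD", "wget", "-q", "--spider", "http://localhost:8080/v1/credits"]
    interval: 5s
    timeout: 3s
    retries: 10
```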
By default, the REST host for my Go application was 127.0.0.1 (localhost), which inside a container accepts only local connections. In nginx, listen 80 means 0.0.0.0:80; once I changed my application's host to 0.0.0.0 as well, it worked!
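The difference between the two bind addresses can be demonstrated in Go (a standalone sketch, not the original application code):

```go
package main

import (
	"fmt"
	"net"
)

// mustListen opens a TCP listener on addr and returns the address it was
// bound to. Binding to 127.0.0.1 inside a container means only processes in
// that same container can connect; nginx, running in a different container,
// gets "connection refused". Binding to 0.0.0.0 (or just ":8080") also
// accepts connections arriving over the Docker network.
func mustListen(addr string) string {
	ln, err := net.Listen("tcp", addr)
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	return ln.Addr().String()
}

func main() {
	fmt.Println("loopback-only bind: ", mustListen("127.0.0.1:0")) // unreachable from other containers
	fmt.Println("all-interfaces bind:", mustListen("0.0.0.0:0"))   // reachable from nginx
}
```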
Related
I am using docker-compose to build containers and to serve the frontend of my website at https://example.com and the backend at a subdomain, https://api.example.com. The SSL certificates for both the root domain and the subdomain are working properly, and I can access the live site (static files served by Nginx) at https://example.com, so at least half of the configuration is working properly. The problem occurs when the frontend tries to communicate with the backend: all calls are met with a "No 'Access-Control-Allow-Origin'" 502 error in the console logs. In the logs of the docker container, this is the error response.
Docker Container Error
2022/03/09 19:01:21 [error] 30#30: *7 connect() failed (111: Connection refused) while connecting
to upstream, client: xxx.xx.xxx.xxx, server: api.example.com, request: "GET /api/services/images/
HTTP/1.1", upstream: "http://127.0.0.1:8000/api/services/images/",
host: "api.example.com", referrer: "https://example.com/"
I think it's likely that something is wrong with my Nginx or docker-compose configuration. When I set SECURE_SSL_REDIRECT, SECURE_HSTS_INCLUDE_SUBDOMAINS, and SECURE_HSTS_SECONDS to False or None (in the Django settings), I am able to hit http://api.example.com:8000/api/services/images/ and get the data I am looking for. So the backend is running and hooked up, just not accepting requests from where I want it to. I've attached the Nginx configuration and the docker-compose.yml. Please let me know if you need more info; I would greatly appreciate any input, and thanks in advance for the help.
Nginx-custom.conf
# Config for the frontend application under example.com
server {
listen 80;
server_name example.com www.example.com;
if ($host = www.example.com) {
return 301 https://$host$request_uri;
}
if ($host = example.com) {
return 301 https://$host$request_uri;
}
return 404;
}
server {
server_name example.com www.example.com;
index index.html index.htm;
add_header Access-Control-Allow-Origin $http_origin;
add_header Access-Control-Allow-Credentials true;
add_header Access-Control-Allow-Headers $http_access_control_request_headers;
add_header Access-Control-Allow-Methods $http_access_control_request_method;
location / {
root /usr/share/nginx/html;
try_files $uri /index.html =404;
}
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
##### Config for the backend server at api.example.com
server {
listen 80;
server_name api.example.com;
return 301 https://$host$request_uri;
}
server {
server_name api.example.com;
add_header Access-Control-Allow-Origin $http_origin;
add_header Access-Control-Allow-Credentials true;
add_header Access-Control-Allow-Headers $http_access_control_request_headers;
add_header Access-Control-Allow-Methods $http_access_control_request_method;
location / {
proxy_pass http://127.0.0.1:8000/; #API Server
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto https;
proxy_redirect off;
}
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/api.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
Docker-Compose File
version: '3.9'
# services that make up the development env
services:
# DJANGO BACKEND
backend:
container_name: example-backend
restart: unless-stopped
image: example-backend:1.0.1
build:
context: ./backend/src
dockerfile: Dockerfile
command: gunicorn example.wsgi:application --bind 0.0.0.0:8000
ports:
- 8000:8000
environment:
- SECRET_KEY=xxx
- DEBUG=0
- ALLOWED_HOSTS=example.com,api.example.com,xxx.xxx.xxx.x
- DB_HOST=postgres-db
- DB_NAME=xxx
- DB_USER=xxx
- DB_PASS=xxx
- EMAIL_HOST_PASS=xxx
# sets a dependency on the db container and there should be a network connection between the two
networks:
- db-net
- shared-network
links:
- postgres-db:postgres-db
depends_on:
- postgres-db
# POSTGRES DATABASE
postgres-db:
container_name: postgres-db
image: postgres
restart: always
volumes:
- example-data:/var/lib/postgresql/data
ports:
- 5432:5432
environment:
- POSTGRES_DB=exampledb
- POSTGRES_USER=user
- POSTGRES_PASSWORD=pass
networks:
- db-net
# ANGULAR & NGINX FRONTEND
frontend:
container_name: example-frontend
build:
context: ./frontend
ports:
- "80:80"
- "443:443"
networks:
- shared-network
links:
- backend
depends_on:
- backend
networks:
shared-network:
driver: bridge
db-net:
volumes:
example-data:
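One thing worth checking in a setup like this: nginx runs inside the frontend container, where 127.0.0.1 refers to that container itself, so proxy_pass http://127.0.0.1:8000/ can never reach the backend container. A sketch of the api.example.com location block targeting the compose service name instead (assuming nginx and the backend share shared-network, as in the compose file above):

```nginx
location / {
    # "backend" is the compose service name; Docker's embedded DNS
    # resolves it on shared-network. 127.0.0.1 here would point at the
    # frontend container itself, where nothing listens on 8000.
    proxy_pass http://backend:8000/;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Proto https;
    proxy_redirect off;
}
```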
I have a NAS behind a router. On this NAS I want to run Nextcloud and Seafile together for testing. Everything should be set up with docker. The jwilder/nginx-proxy container does not work as expected, and I cannot find helpful information. I feel I am missing something very basic.
What is working:
I have a noip.com DynDNS that points to my routers ip: blabla.ddns.net
The router forwards ports 22, 80 and 443 to my NAS at 192.168.1.11
A plain nginx server running on the NAS can be accessed via blabla.ddns.net, its docker-compose.yml is this:
version: '2'
services:
nginxnextcloud:
container_name: nginxnextcloud
image: nginx
restart: always
ports:
- "80:80"
networks:
- web
networks:
web:
external: true
What is not working:
The same nginx server as above, but behind the nginx-proxy. I cannot access this server. Calling blabla.ddns.net gives a 503 error; calling nextcloud.blabla.ddns.net gives "page not found". Viewing the logs of the nginx-proxy via docker logs -f nginxproxy shows every test with blabla.ddns.net and its 503 answer, but when I try to access nextcloud.blabla.ddns.net, no log entry appears at all.
This is the docker-compose.yml for one nginx server behind the nginx-proxy:
version: '2'
services:
nginxnextcloud:
container_name: nginxnextcloud
image: nginx
restart: always
expose:
- 80
networks:
- web
environment:
- VIRTUAL_HOST=nextcloud.blabla.ddns.net
nginx-proxy:
image: jwilder/nginx-proxy
container_name: nginxproxy
ports:
- "80:80"
volumes:
- /var/run/docker.sock:/tmp/docker.sock
networks:
- web
networks:
web:
external: true
The generated configuration file for nginx-proxy /etc/nginx/conf.d/default.conf contains entries for my test server:
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
default $http_x_forwarded_proto;
'' $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
default $http_x_forwarded_port;
'' $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
default upgrade;
'' close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
default off;
https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
access_log off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers off;
resolver 127.0.0.11;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
server_name _; # This is just an invalid value which will never trigger on a real hostname.
listen 80;
access_log /var/log/nginx/access.log vhost;
return 503;
}
# nextcloud.blabla.ddns.net
upstream nextcloud.blabla.ddns.net {
## Can be connected with "web" network
# nginxnextcloud
server 172.22.0.2:80;
}
server {
server_name nextcloud.blabla.ddns.net;
listen 80 ;
access_log /var/log/nginx/access.log vhost;
location / {
proxy_pass http://nextcloud.blabla.ddns.net;
}
}
Why is this minimal example not working?
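Since the proxy logs every request for blabla.ddns.net but shows no entry at all for nextcloud.blabla.ddns.net, the request for the subdomain most likely never reaches the proxy, which points to DNS (not every DynDNS provider resolves arbitrary subdomains of your hostname). One way to test that theory is to bypass DNS and send the virtual host name directly to the NAS IP (a diagnostic sketch, not a fix):

```shell
# Hit the proxy by IP while presenting the virtual host name. If this
# prints 200 while http://nextcloud.blabla.ddns.net/ still fails, the
# subdomain simply does not resolve and the proxy config is fine.
curl -s -o /dev/null -w '%{http_code}\n' \
     -H 'Host: nextcloud.blabla.ddns.net' \
     http://192.168.1.11/
```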
I want to test different subdomains locally, using nginx and docker-compose.
docker-compose.yml:
version: '2'
services:
...
phpmyadmin:
depends_on:
- db
image: phpmyadmin/phpmyadmin
restart: unless-stopped
ports:
- 8081:80
environment:
PMA_HOST: db
MYSQL_ROOT_PASSWORD: p4ssw0rd!
...
nginx:
build: ./backend/nginx
links:
- phpmyadmin
ports:
- "4000:80"
volumes:
- "./backend/nginx/nginx.conf:/etc/nginx/nginx.conf"
nginx.conf:
worker_processes 1;
events { worker_connections 1024; }
http {
sendfile on;
upstream docker-phpmyadmin {
server phpmyadmin:8081;
}
server {
listen 80;
server_name api.example.com;
location / {
proxy_pass http://docker-phpmyadmin;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
}
}
Nginx Dockerfile:
FROM nginx:alpine
COPY ./nginx.conf /etc/nginx/nginx.conf
etc/hosts:
127.0.0.1 example.com
127.0.0.1 api.example.com
127.0.0.1 admin.example.com
When I run my nginx container and I navigate to api.example.com:4000 on my browser I see a 502 Bad Gateway page, and inside the container I get this message:
nginx_1 | 2019/07/27 12:17:00 [error] 6#6: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.21.0.1, server: api.example.com, request: "GET / HTTP/1.1", upstream: "http://172.21.0.4:8081/", host: "api.example.com:4000"
I would guess it should work using port 80 instead of 4000, but how can I test my configuration locally?
Thanks
I was able to fix it by changing my upstream server port to 80:
upstream docker-phpmyadmin {
server phpmyadmin;
}
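The underlying rule: a ports mapping like 8081:80 publishes the container's port 80 as port 8081 on the host only; containers on the same compose network always connect to each other on the container port. So in the compose file:

```yaml
phpmyadmin:
  ports:
    - 8081:80   # host:container; 8081 exists only on the host side,
                # other containers reach this service at phpmyadmin:80
```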
Trying to put together a compose setup using an nginx proxy to couchdb.
I can get this to work swimmingly well in a VM environment, but I can't get it to work in docker - with curl on the host to localhost:8085, I get '(52) Empty reply from server'. Nothing is showing up in the log output, nor is anything showing in the nginx access/error logs in the container.
Here is my docker-compose file:
version: "3"
services:
db.list-2-list:
container_name: db.list-2-list.lcldev
build: ./db.list-2-list/dev
ports:
- "8085:80"
networks:
- backend
- frontend
volumes:
- ./db.list-2-list/dev/conf.d:/etc/nginx/conf.d
couchdb:
container_name: "aph-couchdb"
image: "aph/couchdb"
networks:
- backend
volumes:
- cb-combined:/opt/couchdb/data
networks:
backend:
frontend:
volumes:
cb-combined:
external:
name: "cb-combined"
The 'aph-couchdb' container is derived from the Apache CouchDB image; it just adds my local.ini file (substituting the stock Apache CouchDB image with its default config shows the same results).
Port 5984 is exposed in the base image, so I'm not exposing it here (although I have tried that). If I add a ports entry to the couchdb service publishing 5984:5984, then I can talk to the couchdb container directly, and all seems fine with it. I can also exec into the nginx container and ping/curl aph-couchdb with the expected results.
Here's my nginx config:
server {
listen 80 default;
# listen [::]:80;
location / {
proxy_pass aph-couchdb:5984;
proxy_redirect off;
# proxy_set_header Host $host;
# proxy_set_header X-Real-IP $remote_addr;
# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# proxy_set_header X-Forwarded-Host $server_name;
# proxy_hide_header X-Powered-By;
if ($request_method = OPTIONS ) {
add_header 'Access-Control-Allow-Origin' '$http_origin';
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Methods' 'GET, PUT, POST, HEAD, DELETE, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'X-Auth-CouchDB-UserName,X-Auth-CouchDB-Roles,X-Auth-CouchDB-Token,Accept,Authorization,Origin,Referer,X-Csrf-Token,DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
add_header 'Access-Control-Max-Age' 86400;
add_header 'Content-Type' 'text/plain charset=UTF-8';
add_header 'Content-Length' 0;
return 204;
}
}
location ~ ^/(.*)/_changes {
proxy_pass aph-couchdb:5984;
}
}
The commented out entries are various things I've tried based on other research - none make a difference - I still get the (52) Empty reply from server when I curl localhost:8085 from the host.
Can't seem to get any further on this...
Thoughts appreciated.
rickb
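One detail that stands out in the config above (an observation, not a tested fix): proxy_pass requires a URL scheme, so proxy_pass aph-couchdb:5984; is not valid, and nginx will reject the configuration when it loads, which would also explain the silent logs. A sketch with the scheme added:

```nginx
location / {
    # proxy_pass needs an explicit scheme ("http://" here);
    # without it nginx fails with "invalid URL prefix".
    proxy_pass http://aph-couchdb:5984;
    proxy_redirect off;
}
```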
I am trying to develop a distributed Angular app deployed on Nginx that should connect to a backend service.
docker-compose.yml:
version: '3'
services:
backend_service_1:
build:
context: ./app
dockerfile: Dockerfile
ports:
- "3001:5000"
networks:
- my-network
frontend:
build:
context: ./frontend
dockerfile: Dockerfile.3
ports:
- "3000:80"
networks:
- my-network
links:
- backend_service_1
networks:
my-network:
nginx.conf:
upstream backend {
server backend_service_1:3001;
}
server {
listen 80;
server_name localhost;
location / {
root /usr/share/nginx/html/ki-poc;
index index.html index.htm;
try_files $uri $uri/ /index.html =404;
}
location /backend {
proxy_pass http://backend/;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
}
I can access the app on localhost:3000. I can also get a response from the backend service on localhost:3001 using the browser. However, when I try to get a response from the backend service using the proxy on localhost:3000/backend I receive the following error message:
[error] 5#5: *4 connect() failed (111: Connection refused) while connecting to upstream, client: 172.20.0.1, server: localhost, request: "GET /backend HTTP/1.1", upstream: "http://172.20.0.2:3001/", host: "localhost:3000"
Can you tell me why the request to the linked backend container is getting refused?
You should use the container's port in the nginx config, not the host's:
upstream backend {
server backend_service_1:5000;
}