Nginx proxy and Docker 502 Bad Gateway

I have two Docker containers on the same network: one is a Spring Boot server app, the other is a React client app. I'm trying to get the client to make AJAX calls to the server. When I run them both locally on my machine, outside of Docker, everything works. When I run them with my Docker configuration, using an Nginx proxy, I get 502 Bad Gateway errors.
Here is my docker-compose configuration:
version: '3'
services:
  video-server:
    build:
      context: .
      dockerfile: video-server_Dockerfile
    container_name: video-server
    networks:
      - videoManagerNetwork
    environment:
      - VIDEO_MANAGER_DIR=/opt/videos
    volumes:
      - ${VIDEO_MANAGER_DIR_PROD}:/opt/videos
  video-client:
    build:
      context: .
      dockerfile: video-client_Dockerfile
    container_name: video-client
    networks:
      - videoManagerNetwork
    ports:
      - 9000:80
networks:
  videoManagerNetwork:
As you can see, both containers are given explicit names and are on the same network. video-client is the React app served by Nginx; video-server is the Spring Boot app.
Here is my Nginx config:
worker_processes auto;

events {
    worker_connections 8000;
    multi_accept on;
}

http {
    log_format compression '$remote_addr - $remote_user [$time_local] '
                           '"$request" $status $upstream_addr '
                           '"$http_referer" "$http_user_agent"';

    include /etc/nginx/mime.types;
    default_type text/plain;

    server {
        listen 80;

        # TODO make sure the log is written to a docker volume
        access_log /var/log/nginx/access.log compression;

        root /var/www;
        index index.html;

        location / {
            try_files $uri $uri/ /index.html;
        }

        location /api/ {
            proxy_set_header Host $http_host;
            proxy_pass http://video-server:8080/api/;
        }

        location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
            expires 1M;
            access_log off;
            add_header Cache-Control "public";
        }

        location ~* \.(?:css|js)$ {
            try_files $uri =404;
            expires 1y;
            access_log off;
            add_header Cache-Control "public";
        }

        location ~ ^.+\..+$ {
            try_files $uri =404;
        }
    }
}
As you can see, I'm proxying all calls to /api/ to my video-server container. This should be working. I even shelled into the video-client container (docker exec -it video-client bash), installed curl, and was able to successfully make calls to the other container, e.g. http://video-server:8080/api/categories.
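In shell terms, the check I ran inside the client container was roughly the following (assuming the image is Debian-based, so apt-get is available):

docker exec -it video-client bash
apt-get update && apt-get install -y curl
curl http://video-server:8080/api/categories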
I'm looking for suggestions about what the problem with my configuration could be. I'm not particularly experienced with Nginx, so I'm assuming I'm doing something wrong there.
Edit
I finally figured out what was necessary to make this work. I would still be interested to understand why this helps.
I added the following lines to the "http" section of the Nginx config, and the problem was solved:
fastcgi_buffers 8 16k;
fastcgi_buffer_size 32k;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
So it looks like this changed the buffer and timeout settings. Why did this help?
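For reference, the proxy_* counterparts of those directives (which are the ones that apply to proxy_pass upstreams, whereas fastcgi_* applies to fastcgi_pass) would look like this in the http block; this is a sketch of the equivalent tuning, not a confirmed explanation of why the fastcgi_* lines helped:

http {
    # proxy_* directives are the ones that affect proxy_pass upstreams
    proxy_buffers 8 16k;
    proxy_buffer_size 32k;
    proxy_connect_timeout 300;
    proxy_send_timeout 300;
    proxy_read_timeout 300;
}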

Related

Can't connect from one docker container to another by its public domain name

I have an application composed of containerized web-services deployed with docker-compose (it's a test env). One of the containers is nginx that operates as a reverse proxy for services and also serves static files. A public domain name points to the host machine and nginx has a server section that utilizes it.
The problem I am facing is that I can't talk to nginx by that public domain name from containers launched on this same machine - the connection always times out. (For example, I tried curl https://<mypublicdomain>.com)
Referring to the containers by name (using Docker's hostnames) works just fine. Requests to the same domain name from other machines also work fine.
I understand this has to do with how docker does networking, but fail to find any docs that would outline what exactly goes wrong here. Could anyone explain the root of the issue to me or maybe just point in the right direction?
(For extra context: originally I was going to use this to set up monitoring with prometheus and blackbox exporter to make it see the server the same way anyone from the outside would do + to automatically check that SSL is working. For now I pulled back to point the prober to nginx by its docker hostname)
Nginx image
FROM nginx:stable
COPY ./nginx.conf /etc/nginx/nginx.conf.template
COPY ./docker-entrypoint.sh /docker-entrypoint.sh
COPY ./dhparam/dhparam-2048.pem /dhparam-2048.pem
COPY ./index.html /var/www/index.html
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
docker-compose.yaml
version: "3"
networks:
mainnet:
driver: bridge
services:
my-gateway:
container_name: my-gateway
image: aturok/manuwor_gateway:latest
restart: always
networks:
- mainnet
ports:
- 80:80
- 443:443
expose:
- "443"
volumes:
- /var/stuff:/var/www
- /var/certs:/certsdir
command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
(I show only the nginx service, as the others are irrelevant - I would, for example, spin up a nettools container that is not connected to the mainnet network and still expect its requests to reach nginx, since I am using the public domain name. The problem also happens with containers connected to the same network.)
nginx.conf (normally it contains a bunch of env vars; here they have been substituted in and the irrelevant backend sections removed)
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    #include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        server_name mydomain.com;

        location /.well-known/acme-challenge/ {
            root /var/www/certbot;
        }

        location / {
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen 443 ssl;
        server_name mydomain.com;

        ssl_certificate /certsdir/fullchain.pem;
        ssl_certificate_key /certsdir/privkey.pem;

        server_tokens off;

        ssl_buffer_size 8k;

        ssl_dhparam /dhparam-2048.pem;

        ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
        ssl_prefer_server_ciphers on;

        ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;

        ssl_ecdh_curve secp384r1;
        ssl_session_tickets off;

        ssl_stapling on;
        ssl_stapling_verify on;
        resolver 8.8.8.8;

        root /var/www/;
        index index.html;

        location / {
            root /var/www;
            try_files $uri /index.html;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
Note: certificates are ok when I access the server from elsewhere

Docker Nginx Reverse Proxy for Protection of Docker Container

I have two Docker services (an Angular web app and a Tomcat backend) which I want to protect with a third Docker service: an nginx configured as a reverse proxy. My proxy configuration is working, but I'm struggling with the basic authentication the reverse proxy should also handle. When I protect my Angular frontend service with basic auth via the reverse-proxy config, everything works fine, but my backend is still exposed to everyone. When I also add basic auth to the backend service, the problem is that the Authorization header from my frontend is not forwarded/added to the backend REST requests. Is it possible to configure the nginx reverse proxy to add the Authorization header to each request sent by the frontend? Or maybe I'm thinking about this wrong and there is a better solution?
Here is my docker and nginx configuration:
reverse-proxy config:
worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;

    upstream docker-nginx {
        server frontend-nginx:80;
    }

    upstream docker-tomcat {
        server backend-tomcat:8080;
    }

    map $upstream_http_docker_distribution_api_version $docker_distribution_api_version {
        '' 'registry/2.0';
    }

    server {
        listen 80;

        location / {
            auth_basic "Protected area";
            auth_basic_user_file /etc/nginx/conf.d/nginx.htpasswd;

            add_header 'Docker-Distribution-Api-Version' $docker_distribution_api_version always;

            proxy_pass http://docker-nginx;
            proxy_redirect off;
        }
    }

    server {
        listen 8080;

        location / {
            auth_basic "Protected area";
            auth_basic_user_file /etc/nginx/conf.d/nginx.htpasswd;

            add_header 'Docker-Distribution-Api-Version' $docker_distribution_api_version always;

            proxy_pass http://docker-tomcat;
            proxy_redirect off;
        }
    }
}
docker-compose (setting up all containers):
version: '2.4'
services:
  reverse-proxy:
    container_name: reverse-proxy
    image: nginx:alpine
    volumes:
      - ./auth:/etc/nginx/conf.d
      - ./auth/nginx.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "80:80"
      - "8080:8080"
    restart: always
    links:
      - registry:registry
  frontend-nginx:
    container_name: frontend
    build: './frontend'
    volumes:
      - /dockerdev/frontend/dist/:/usr/share/nginx/html
    depends_on:
      - reverse-proxy
      - bentley-tomcat
    restart: always
  backend-tomcat:
    container_name: backend
    build: './backend'
    volumes:
      - /data:/data
    depends_on:
      - reverse-proxy
    restart: always
  registry:
    image: registry:2
    ports:
      - 127.0.0.1:5000:5000
    volumes:
      - ./data:/var/lib/registry
frontend Dockerfile:
FROM nginx
COPY ./dist/ /usr/share/nginx/html
COPY ./fast-nginx-default.conf /etc/nginx/conf.d/default.conf
frontend config:
server {
    listen 80;

    sendfile on;
    default_type application/octet-stream;

    gzip on;
    gzip_http_version 1.1;
    gzip_disable "MSIE [1-6]\.";
    gzip_min_length 256;
    gzip_vary on;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_comp_level 9;

    root /usr/share/nginx/html;

    location / {
        try_files $uri $uri/ /index.html =404;
    }
}
backend Dockerfile:
FROM openjdk:11
RUN mkdir -p /usr/local/bin/tomcat
COPY ./backend-0.0.1-SNAPSHOT.jar /usr/local/bin/tomcat/backend-0.0.1-SNAPSHOT.jar
WORKDIR /usr/local/bin/tomcat
CMD ["java", "-jar", "backend-0.0.1-SNAPSHOT.jar"]
Try adding these directives to your location block:
proxy_set_header Authorization $http_authorization;
proxy_pass_header Authorization;
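In context, that would look roughly like this in the backend server block of the reverse proxy above (a sketch; whether passing the client's credentials straight through is what you want depends on your setup):

server {
    listen 8080;

    location / {
        auth_basic "Protected area";
        auth_basic_user_file /etc/nginx/conf.d/nginx.htpasswd;

        # forward the credentials the client sent so the backend receives them too
        proxy_set_header Authorization $http_authorization;
        proxy_pass_header Authorization;

        proxy_pass http://docker-tomcat;
        proxy_redirect off;
    }
}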
I've solved my issue by listening on port 80 for requests to /api and redirecting them to the Tomcat on port 8080. For that I also had to adjust my frontend and backend requests, so now all my backend requests begin with /api. With this solution I'm able to implement basic auth on port 80 to protect both the frontend and the backend.
worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;
    client_max_body_size 25M;

    upstream docker-nginx {
        server frontend-nginx:80;
    }

    upstream docker-tomcat {
        server backend-tomcat:8080;
    }

    server {
        listen 80;

        location /api {
            proxy_pass http://docker-tomcat;
        }

        location / {
            auth_basic "Protected area";
            auth_basic_user_file /etc/nginx/conf.d/nginx.htpasswd;

            proxy_pass http://docker-nginx;
            proxy_redirect off;
        }
    }
}

Nginx as reverse proxy server for Nexus - can't connect in docker environment

I have an environment built upon Docker containers (in boot2docker). I have the following docker-compose.yml file to quickly set up nginx and Nexus servers:
version: '3.2'
services:
  nexus:
    image: stefanprodan/nexus
    container_name: nexus
    ports:
      - 8081:8081
      - 5000:5000
  nginx:
    image: nginx:latest
    container_name: nginx
    ports:
      - 5043:443
    volumes:
      - /opt/dm/nginx2/nginx.conf:/etc/nginx/nginx.conf:ro
Nginx has the following configuration (nginx.conf):
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    proxy_send_timeout 120;
    proxy_read_timeout 300;
    proxy_buffering off;
    keepalive_timeout 5 5;
    tcp_nodelay on;

    server {
        listen 80;
        server_name demo.com;
        return 301 https://$server_name$request_uri;
    }

    server {
        listen 443 ssl;
        server_name demo.com;

        # allow large uploads of files - refer to nginx documentation
        client_max_body_size 1024m;

        # optimize downloading files larger than 1G - refer to nginx doc before adjusting
        #proxy_max_temp_file_size 2048m

        #ssl on;
        #ssl_certificate /etc/nginx/ssl.crt;
        #ssl_certificate_key /etc/nginx/ssl.key;

        location / {
            proxy_pass http://nexus:8081/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto "https";
        }
    }
}
Nexus seems to work very well. I can successfully call curl http://localhost:8081 on the Docker host machine; this returns the HTML of the Nexus login page. Now I want to try the nginx server. It is configured to listen on port 443, but SSL is disabled for now (I wanted to test the proxy before diving into the SSL configuration). As you can see, my nginx container maps container port 443 to host port 5043. Thus, I try the following curl command: curl -v http://localhost:5043/. I expect my HTTP request to be sent to nginx and proxied (via proxy_pass http://nexus:8081/;) to Nexus. The nexus hostname is visible within the Docker container network and is accessible from the nginx container. Unfortunately, in response I receive:
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 5043 (#0)
> GET / HTTP/1.1
> Host: localhost:5043
> User-Agent: curl/7.49.1
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact
curl: (52) Empty reply from server
I checked the nginx logs (error and access), but they are empty. Can somebody help me solve this problem? It should be just a simple example of proxying requests, but maybe I misunderstand some concept?
Do you have an upstream directive in your nginx conf (placed within the http directive)?
upstream nexus {
    server <Nexus_IP>:<Nexus_Port>;
}
Only then can nginx resolve it correctly. The docker-compose service name nexus is not injected into the nginx container at runtime.
You can try links in docker-compose:
https://docs.docker.com/compose/compose-file/#links
This gives you an alias for the linked container in your /etc/hosts. But you still need an upstream directive. Update: if the name is resolvable, you can just as well use it directly in nginx directives, e.g. inside the location block.
https://serverfault.com/questions/577370/how-can-i-use-environment-variables-in-nginx-conf
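For illustration, a links entry in the compose file might look like this (a sketch; the service names are taken from the compose file above):

services:
  nginx:
    image: nginx:latest
    links:
      - nexus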
As in @arnold's answer, you are missing the upstream configuration in your nginx config. I see you are using the stefanprodan Nexus image; see his blog for the full configuration. Below you can find mine (remember to open ports 8081 and 5000 of Nexus even though the entry point is 443). Besides, you need to include the certificate because the Docker client requires working SSL:
worker_processes 2;

events {
    worker_connections 1024;
}

http {
    error_log /var/log/nginx/error.log warn;
    access_log /dev/null;

    proxy_intercept_errors off;
    proxy_send_timeout 120;
    proxy_read_timeout 300;

    upstream nexus {
        server nexus:8081;
    }

    upstream registry {
        server nexus:5000;
    }

    server {
        listen 80;
        listen 443 ssl default_server;
        server_name <yourdomain>;

        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

        ssl_certificate /etc/letsencrypt/live/<yourdomain>/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/<yourdomain>/privkey.pem;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 5m;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_dhparam /etc/ssl/certs/dhparam.pem;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";

        keepalive_timeout 5 5;
        proxy_buffering off;

        # allow large uploads
        client_max_body_size 1G;

        location / {
            # redirect to docker registry
            if ($http_user_agent ~ docker ) {
                proxy_pass http://registry;
            }
            proxy_pass http://nexus;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto "https";
        }
    }
}
The certificates are generated using letsencrypt or certbot. The rest of the configuration is there to get an A+ in the SSL Labs analysis, as explained here.
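For reference, obtaining such a certificate typically looks something like the following certbot invocation (a sketch; the domain is a placeholder and this exact command is not part of the original answer):

certbot certonly --standalone -d yourdomain.com   # port 80 must be free while this runs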
Port 5000 in your docker-compose setup ends up as a dynamic port (because it is not EXPOSEd by the image), so you cannot connect to port 5000; the port mappings
ports:
  - 8081:8081
  - 5000:5000
do not take effect.
You can do it like this:
Build a new Dockerfile that exposes port 5000 (mine is the image used in the compose file below):
FROM sonatype/nexus3:3.16.2
EXPOSE 5000
Then use the new image to start the container and publish the ports:
version: "3.7"
services:
nexus:
image: 'feibor/nexus:3.16.2-1'
deploy:
placement:
constraints:
- node.hostname == node1
restart_policy:
condition: on-failure
ports:
- 8081:8081/tcp
- 5000:5000/tcp
volumes:
- /mnt/home/opt/nexus/nexus-data:/nexus-data:z

Nginx reverse proxy to .Net Core API in docker

I'm having trouble getting the following to work in Docker.
What I want is that when the user requests http://localhost/api, NGINX reverse proxies the request to my .Net Core API running in another container.
Container Host: Windows
Container 1: NGINX
dockerfile
FROM nginx
COPY ./nginx.conf /etc/nginx/nginx.conf
nginx.conf
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    server {
        location /api1 {
            proxy_pass http://api;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection keep-alive;
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
Container 2: .Net Core API
Dead simple - API exposed on port 80 in the container
Then there is the docker-compose.yml
docker-compose.yml
version: '3'
services:
  api1:
    image: api1
    build:
      context: ./Api1
      dockerfile: Dockerfile
    ports:
      - "5010:80"
  nginx:
    image: vc-nginx
    build:
      context: ./infra/nginx
      dockerfile: Dockerfile
    ports:
      - "5000:80"
Reading the Docker documentation it states:
Links allow you to define extra aliases by which a service is reachable from another service. They are not required to enable services to communicate - by default, any service can reach any other service at that service's name.
So as my API service is called api1, I've simply referenced this in the nginx.conf file as part of the reverse proxy configuration:
proxy_pass http://api1;
Something is wrong, as when I request http://localhost/api I get a 404 error.
Is there a way to fix this?
The problem is the nginx location configuration.
The 404 error is correct, because your configuration proxies requests from http://localhost/api/some-resource to a missing resource: your mapping is for the /api1 path, while you're requesting /api.
So you only need to change the location to /api and it will work.
Keep in mind that requests to http://localhost/api will be proxied to http://api1/api (the path is kept). If your backend is configured to expose its API under an /api prefix this is fine; otherwise you will receive another 404 (this time from your service).
To avoid this you should rewrite the path before proxying the request with a rule like this:
# transform /api/some-resource/1 to /some-resource/1
rewrite /api/(.*) /$1 break;
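Put together, the adjusted location block would look roughly like this (a sketch based on the nginx.conf above):

location /api {
    # strip the /api prefix before proxying: /api/some-resource/1 -> /some-resource/1
    rewrite /api/(.*) /$1 break;
    proxy_pass http://api1;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection keep-alive;
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}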

Docker and NGINX - host not found in upstream when building with docker-compose

I am attempting to use an NGINX container to host a static web application. This container should also redirect certain requests (e.g. www.example.com/api/) to another container on the same network.
I am getting the "host not found in upstream" issue when calling docker-compose build, even though I am enforcing that the NGINX container is the last to be built.
I have tried the following solutions:
Enforcing a network name and aliases (as per Docker: proxy_pass to another container - nginx: host not found in upstream)
Adding a "resolver" directive (as per Docker Networking - nginx: [emerg] host not found in upstream and others), both for 8.8.8.8 and 127.0.0.11.
Rewriting the nginx.conf file to have the upstream definition before the location that will redirect to it, or after it.
I am running on a Docker for Windows machine that is using a mobylinux VM to run the relevant container(s). Is there something I am missing? It isn't obvious to me that the "http://webapi" address should resolve correctly, as the images are built but not running when you are calling docker-compose.
nginx.conf:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    upstream docker-webapi {
        server webapi:80;
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            root /wwwroot/;
            try_files $uri $uri/ /index.html;
        }

        location /api {
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://docker-webapi;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}
docker-compose:
version: '3'
services:
  webapi:
    image: webapi
    build:
      context: ./src/Api/WebApi
      dockerfile: Dockerfile
    volumes:
      - /etc/example/secrets/:/app/secrets/
    ports:
      - "61219:80"
  model.api:
    image: model.api
    build:
      context: ./src/Services/Model/Model.API
      dockerfile: Dockerfile
    volumes:
      - /etc/example/secrets/:/app/secrets/
    ports:
      - "61218:80"
  webapp:
    image: webapp
    build:
      context: ./src/Web/WebApp/
      dockerfile: Dockerfile
    ports:
      - "80:80"
    depends_on:
      - webapi
Dockerfile:
FROM nginx
RUN mkdir /wwwroot
COPY nginx.conf /etc/nginx/nginx.conf
COPY wwwroot ./wwwroot/
EXPOSE 80
RUN service nginx start
Your issue is the line below:
RUN service nginx start
You never run the service command inside Docker, because there is no init system. Also, RUN commands are executed at build time, not when the container starts.
The original nginx image has everything you need for nginx to start fine, so just remove that line and it will work.
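The corrected Dockerfile is simply the original one without that last RUN line:

FROM nginx
RUN mkdir /wwwroot
COPY nginx.conf /etc/nginx/nginx.conf
COPY wwwroot ./wwwroot/
EXPOSE 80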
By default the nginx image has the CMD instruction below:
CMD ["nginx" "-g" "daemon off;"]
You can easily verify that by running the command below:
docker history --no-trunc nginx | grep CMD
