Does anyone see what I did wrong with my Nginx Reverse Proxy? I am getting a 502 Bad Gateway and I can't seem to figure out where my ports are wrong.
Nginx
/etc/nginx/sites-enabled/default
upstream reverse_proxy {
    server 35.237.158.31:8080;
}

server {
    listen 80;
    server_name 35.237.158.31;

    location / {
        proxy_pass http://reverse_proxy;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_cache_bypass $http_upgrade;
    }
}
/etc/nginx/sites-enabled/jesse.red [VHOST]
upstream jessered {
    server 127.0.0.1:2600; # <-- PORT 2600
}

server {
    server_name jesse.red;
    #root /var/www/jesse.red/;

    # ---------------------------------------------------------------
    # Location
    # ---------------------------------------------------------------
    location / {
        proxy_pass http://jessered;
        #proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 90;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/jesse.red/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/jesse.red/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = jesse.red) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name jesse.red;
    listen 80;
    return 404; # managed by Certbot
}
Docker
Below, it's running on port 2600:
$ docker ps
9d731afed500 wordpress:php7.0-fpm-alpine "docker-entrypoint.s…" 3 days ago Up 17 hours 9000/tcp, 0.0.0.0:2600->80/tcp jesse.red
/var/www/jesse.red/docker-compose.yml
version: '3.1'

services:
  jessered:
    container_name: jesse.red
    image: wordpress:4-fpm-alpine
    restart: always
    ports:
      - 2600:80 # <-- PORT 2600
    env_file:
      - ./config.env # Contains .gitignore params
Testing Docker
$ docker-compose logs
Attaching to jesse.red
jesse.red | WordPress not found in /var/www/html - copying now...
jesse.red | Complete! WordPress has been successfully copied to /var/www/html
jesse.red | [03-Jul-2018 11:15:07] NOTICE: fpm is running, pid 1
jesse.red | [03-Jul-2018 11:15:07] NOTICE: ready to handle connections
System
Below, port 2600 is in use.

$ ps aux | grep 2600
root 1885 0.0 0.1 232060 3832 ? Sl Jul02 0:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 2600 -container-ip 172.20.0.2 -container-port 80
I'm not sure what went wrong; any help is really appreciated. I searched many places before asking but haven't figured it out.
Nginx request processing chooses a server block like this:
First it checks the listen directives for an exact IP:port match; if there are no exact matches, it checks for IP or port matches. An address with no port is treated as port 80.
From those matches it then checks the request's Host header, looking for a matching server_name directive among the matched blocks. If it finds a match, that server handles the request; if not, and no default_server directive is set, the request is passed to the server listed first in your config.
So you have server_name 35.237.158.31; on port 80, and server_name jesse.red; also on port 80.
IP addresses should be part of the listen directive, not the server_name, although this might still match some requests. Assuming this is being accessed from the outside world, it's unlikely that jesse.red will be in anyone's Host headers.
Assuming no matches, the request is passed to whichever server Nginx finds first with a port match. I'm assuming Nginx includes files alphabetically, so your configs will load like this:
/etc/nginx/sites-enabled/default
/etc/nginx/sites-enabled/jesse.red
and now all your requests on port 80 with no Host match, or with the IP address in the Host field, are getting proxied to:
upstream reverse_proxy {
    server 35.237.158.31:8080;
}
That's my guess anyway, your Nginx logs will probably give you a fairly definitive answer.
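One way to make that selection explicit (a minimal sketch, not the asker's actual config; the choice of return 444 is an assumption about what unmatched requests should get) is to mark a catch-all block as default_server and keep the IP out of server_name:

```nginx
# sketch for /etc/nginx/sites-enabled/default -- assumes unmatched Host
# headers should be dropped rather than proxied to the first block found
server {
    listen 80 default_server;  # wins whenever no server_name matches the Host header
    server_name _;             # "_" is a conventional never-matching placeholder
    return 444;                # nginx-specific: close the connection without a response
}
```

With this in place, only requests whose Host header actually matches jesse.red reach the vhost, and everything else is rejected instead of falling through to whichever file loads first.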
Related
Hi there. I have a Docker nginx reverse proxy configured with SSL, with a configuration like this:
server {
    listen 80;
    server_name example.com;

    location / {
        return 301 https://$host$request_uri;
    }

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}

server {
    listen 443 ssl;
    server_name example.com;
    root /var/www/example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    # some locations: location ... { ... }
}
The certificates are configured with certbot and working fine.
All the containers are up and running. But I have multiple websites running on my server, managed by a local NginX, so I set the ports for the Docker NginX like this:
nginx:
  image: example/example_nginx:latest
  container_name: example_nginx
  ports:
    - "8123:80"
    - "8122:443"
  volumes:
    - ${PROJECT_ROOT}/data/certbot/conf:/etc/letsencrypt
    - ${PROJECT_ROOT}/data/certbot/www:/var/www/certbot
  command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
Docker port 80 maps to local port 8123 (http), and Docker port 443 maps to local port 8122 (https). To pass requests from the local NginX to the Docker container's NginX, I use the following config:
server {
    listen 80;
    server_name example.com;

    location / {
        access_log off;
        proxy_pass http://localhost:8123;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_redirect off;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}

server {
    listen 443 ssl;
    server_name example.com;

    location / {
        access_log off;
        proxy_pass https://localhost:8122;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_redirect off;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
When I open the website it works, but the certificate seems to be broken and my WebSockets crash.
My question is: how can I pass SSL processing from the local NginX to the Docker NginX so that it works as expected?
I've been searching for a long time for a solution to my problem. I guess the answer is already out there, but I'm not searching with the right terms.
I'm using NGINX to forward all requests on port 80, and this works well, because those are forwarded to my own public domain. Now I have a service that I do not want to publish on the internet; I just want it on a different port within my network, e.g. 192.168.123.1:10000.
This is what my nginx.conf looks like for an example service. I have more server blocks for other services. The important part is the proxy_pass, which here forwards to the Docker container nextcloudpi. But how can I internally proxy_pass to something without a real domain?
server {
    listen 80 default_server;
    server_name _;
    server_name_in_redirect off;

    location / {
        return 404;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name my-domain.de cloud.my-domain.de www.my-domain.de;
    return 301 https://$host$request_uri;
}

# Cloud
server {
    server_name cloud.my-domain.de;
    #access_log /var/log/nginx/cloud-access.log;
    error_log /var/log/nginx/cloud-error.log;

    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    client_max_body_size 100G;

    location / {
        proxy_send_timeout 1d;
        proxy_read_timeout 1d;
        proxy_buffering off;
        proxy_hide_header Upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #add_header Front-End-Https on;
        proxy_pass https://nextcloudpi;
    }
}
I want to use this for Invoice Ninja, for example. How do I set that up in Docker? I normally use expose and let NGINX handle everything to do with port 80. But if I want a different internal port, how do I do that? I know how to do it in plain Docker, as below, but that won't work alongside NGINX:
invoiceninja:
  container_name: invoiceninja
  image: invoiceninja/invoiceninja:latest
  ports:
    - 10000:80
  restart: always
  volumes:
    - /storage/appdata/invoiceninja/public:/var/app/public
    - /storage/appdata/invoiceninja/storage:/var/app/storage
  networks:
    - invoiceninja
  env_file:
    - .secrets/invoiceninja.env
  depends_on:
    - invoiceninja-db
Basically: how do I forward port 80 of the Invoice Ninja Docker container to a different port so I can access it internally, like 192.168.123.1:10000?
I'm trying to use nginx as a reverse proxy inside a container, pointing to a separate PHP application container.
My PHP container receives requests on external port 8080 and forwards them to internal port 80. I want nginx to listen on port 80 and forward requests to the PHP container on port 8080, but I'm having trouble redirecting the request.
My nginx Dockerfile:
FROM nginx:latest
COPY default.conf /etc/nginx/conf.d/default.conf
My nginx default.conf:
server {
    listen 80;
    error_page 497 http://$host:80$request_uri;
    client_max_body_size 32M;
    underscores_in_headers on;

    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://php-container:8080;
        proxy_read_timeout 90;
        proxy_http_version 1.1;
    }
}
I've tried deploying it via docker-compose with the above yml file, but got the same error when I curl nginx.
When I curl http://localhost:8080 (the PHP application) and http://localhost:80 (nginx), there is log output in docker-compose logs.
But when I curl nginx, I get the above error:
You have a misconfiguration here.
nginx (host:80 -> container:8080)
php-app (host:8080 -> container:80)
Nginx can't reach another container via "localhost", because each container has its own network namespace.
I suggest you create a Docker network (docker network create) and attach both containers to it. Then, in the Nginx config, you can refer to the php-app container by name:
proxy_read_timeout 90;
proxy_redirect http://localhost:80/ http://php-container:8080/;
Besides, you can then publish only the Nginx port, and your backend will be safe from outside access.
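As a sketch of that suggestion (the service names, images, and network name here are assumptions, not taken from the question), a docker-compose file that puts both containers on one user-defined network so nginx can resolve the PHP container by service name:

```yaml
# hypothetical docker-compose.yml sketch
version: '3.1'

services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"        # only nginx is published to the host
    networks:
      - appnet

  php-container:       # service name matches the host used in proxy_pass
    image: php:fpm-alpine
    # no "ports:" entry -- reachable only from containers on appnet
    networks:
      - appnet

networks:
  appnet:
```

On a user-defined network, Docker's embedded DNS resolves service names, so proxy_pass http://php-container:8080; addresses the container directly instead of going through the host's published port.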
I'm trying to implement SSL in my application using Docker with the nginx image. I have two apps, one for the back-end (api) and another for the front-end (admin). It's working over http on port 80, but I need to use https. This is my nginx config file:
upstream ulib-api {
    server 10.0.2.229:8001;
}

server {
    listen 80;
    server_name api.ulib.com.br;

    location / {
        proxy_pass http://ulib-api;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    client_max_body_size 100M;
}

upstream ulib-admin {
    server 10.0.2.229:8002;
}

server {
    listen 80;
    server_name admin.ulib.com.br;

    location / {
        proxy_pass http://ulib-admin;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    client_max_body_size 100M;
}
I found some tutorials, but they all use docker-compose; I need to set it up with a Dockerfile. Can anyone shed some light?
... I'm using an ECS instance on AWS and the project is built with CI/CD.
This is just one possible way:
First, issue a certificate using certbot. You will end up with a couple of *.pem files.
There are plenty of tutorials on installing and running certbot on different systems; I used Ubuntu with the command certbot --nginx certonly. You need to run this command on the machine serving your domain, because certbot verifies that you own the domain through a number of challenges.
Second, create the nginx containers. You will need a proper nginx.conf and you need to make the certificates available inside the containers. I use docker volumes, but that is not the only way.
My nginx.conf looks like following:
http {
    server {
        listen 443 ssl;
        ssl_certificate /cert/<yourdomain.com>/fullchain.pem;
        ssl_certificate_key /cert/<yourdomain.com>/privkey.pem;
        ssl_trusted_certificate /cert/<yourdomain.com>/chain.pem;
        ssl_protocols TLSv1.2 TLSv1.3; # SSLv3 and TLS 1.0/1.1 are insecure and best left disabled
        ...
    }
}
Last, you run nginx with proper volumes connected:
docker run -d -v $PWD/nginx.conf:/etc/nginx/nginx.conf:ro -v $PWD/cert:/cert:ro -p 443:443 nginx:1.15-alpine
Notice:
I mapped $PWD/cert into the container as /cert. This is the folder where the *.pem files are stored; they live under ./cert/example.com/*.pem.
Inside nginx.conf you refer to these certificates with the ssl_... directives.
You need to publish port 443 to be able to connect.
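The same docker run invocation can also be written as a compose file, if that fits your deployment better (a sketch under the answer's assumptions about file layout; the service name is made up):

```yaml
# hypothetical docker-compose.yml equivalent of the docker run command above
version: '3.1'

services:
  nginx:
    image: nginx:1.15-alpine
    ports:
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro  # main config, read-only
      - ./cert:/cert:ro                        # *.pem files under ./cert/<yourdomain.com>/
```

Either way, the container only needs read access to the certificates, which is why both mounts are :ro.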
I have an Ubuntu 18.04 LTS box with nginx installed and configured as a reverse proxy:
/etc/nginx/sites-enabled/default:
server {
    server_name example.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://0.0.0.0:3000;
    }
}
I have a website running in a Docker container listening on port 3000. With this configuration, if I browse to http://example.com, I see the site.
I then installed Let's Encrypt's certbot using the standard install from their website, ran sudo certbot --nginx, and followed the instructions to enable https for example.com.
Now my /etc/nginx/sites-enabled/default looks like this, and I'm unable to load the site at either https://example.com or http://example.com:
server {
    server_name example.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://0.0.0.0:3000;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
Any ideas?
I figured it out. The problem wasn't with my nginx/Let's Encrypt config; it was a networking issue at the provider level (Azure).
I noticed the Network Security Group only allowed traffic on port 80. The solution was to add a rule for port 443.
After adding this rule, everything works as expected.