I've been searching for a long time for a solution to my problem. I suspect the answer already exists somewhere, but I'm probably not searching with the right terms.
I'm using NGINX to forward all requests on port 80, and this works well because they are forwarded to my own public domain. Now I have a service that I do not want to publish on the internet; instead I just want to reach it on a different port inside my network, e.g. 192.168.123.1:10000.
This is what my nginx.conf looks like for an example service (I have more server blocks for other services). The important part is the proxy_pass, which here forwards to the Docker container nextcloudpi. But how can I proxy_pass something internally without a real domain?
server {
    listen 80 default_server;
    server_name _;
    server_name_in_redirect off;

    location / {
        return 404;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name my-domain.de cloud.my-domain.de www.my-domain.de;
    return 301 https://$host$request_uri;
}

# Cloud
server {
    server_name cloud.my-domain.de;
    #access_log /var/log/nginx/cloud-access.log;
    error_log /var/log/nginx/cloud-error.log;

    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    client_max_body_size 100G;

    location / {
        proxy_send_timeout 1d;
        proxy_read_timeout 1d;
        proxy_buffering off;
        proxy_hide_header Upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #add_header Front-End-Https on;
        proxy_pass https://nextcloudpi;
    }
}
I want to use this for Invoice Ninja, for example. How do I set that up in Docker? I normally use expose and let NGINX handle everything on port 80. But if I want the service on a different internal address and port, how do I do that? I know how to do it the plain Docker way, as I tried below, but that doesn't go through NGINX:
invoiceninja:
  container_name: invoiceninja
  image: invoiceninja/invoiceninja:latest
  ports:
    - 10000:80
  restart: always
  volumes:
    - /storage/appdata/invoiceninja/public:/var/app/public
    - /storage/appdata/invoiceninja/storage:/var/app/storage
  networks:
    - invoiceninja
  env_file:
    - .secrets/invoiceninja.env
  depends_on:
    - invoiceninja-db
Basically: how do I forward port 80 of the Invoice Ninja Docker container to a different port so that I can access it internally, e.g. at 192.168.123.1:10000?
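One possible approach (a minimal sketch, not a confirmed solution) is to keep the container on expose only and let NGINX itself listen on the extra port, assuming NGINX runs in Docker, shares a network with the invoiceninja container, and publishes port 10000 on the host (e.g. 10000:10000 in its ports section):

# Hypothetical internal-only server block, no domain needed
server {
    listen 10000;
    server_name _;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # "invoiceninja" resolves via Docker's embedded DNS on the shared network
        proxy_pass http://invoiceninja:80;
    }
}

The service would then be reachable on the LAN as 192.168.123.1:10000 while still passing through NGINX, without ever being exposed under the public domain.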
Related
I'm trying to use an Nginx reverse proxy to expose Ombi publicly. Both Nginx and Ombi are running in containers on an Ubuntu 22 host. Opening http://hostname:3579 (3579 is the port Ombi uses) works fine, and if I open port 3579 on my router then http://MYDOMAIN.dev:3579 works too. However, with the config below I just get a 502 Bad Gateway when I try to connect to https://ombi.MYDOMAIN.dev.
Docker-compose.yaml:
services:
  ombi:
    image: lscr.io/linuxserver/ombi:latest
    container_name: ombi
    environment:
      - PUID=1004
      - PGID=1004
      - TZ=America/Los_Angeles
      # - BASE_URL=/ombi #optional
    volumes:
      - /mnt/vault/data/ombi/config:/config
    ports:
      - 3579:3579
    restart: unless-stopped
  nginx:
    image: lscr.io/linuxserver/nginx:latest
    container_name: nginx
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
    volumes:
      - /mnt/vault/data/nginx:/config
      - /mnt/vault/data/nginx/certbot/www:/var/www/certbot/:ro
    ports:
      - 80:80
      - 443:443
    restart: unless-stopped
Nginx-base.conf
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name ombi.MYDOMAIN.dev;

    location / {
        proxy_pass http://localhost:3579;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        # Added the below per the advice of the following Stack Overflow
        # https://stackoverflow.com/questions/47091356/docker-nginx-reverse-proxy-gives-502-bad-gateway
        proxy_buffering off;
        proxy_buffer_size 16k;
        proxy_busy_buffers_size 24k;
        proxy_buffers 64 4k;
    }

    # This allows access to the actual api
    location /api {
        proxy_pass http://localhost:3579;
    }

    # This allows access to the documentation for the api
    location /swagger {
        proxy_pass http://localhost:3579;
    }
}
SSL.conf
Note: /config/keys/ is an obfuscation but Nginx can find the keys and I have registered the appropriate domain through certbot.
ssl_certificate /config/keys/fullchain.pem;
ssl_certificate_key /config/keys/privkey.pem;
ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m;
ssl_dhparam /config/dhparams.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
# OCSP stapling
ssl_stapling on;
ssl_stapling_verify on;
Perhaps most confusingly, I don't see anything in the logs. When I run docker logs nginx I just get the system startup logs, and when I check the logs in the Ombi UI it doesn't mention anything about failed connections. I'm at a loss as to how to troubleshoot this.
I've tried a bunch of variations here, including (a) turning Ombi's base_url on/off and (b) setting up the reverse proxy as a URI path, i.e. https://MYDOMAIN.dev/ombi. Anyone who can help me figure this out will earn my undying gratitude.
Well, I figured it out just a few minutes after posting here. I believe the problem is that localhost means something different inside the containers than it does on the host server. I fixed this by replacing localhost with the IP address of the host machine, and everything started working.
I also streamlined things by specifying an upstream source. The conf file looks like this now:
upstream ombiserver {
    server 192.168.4.119:3579;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name ombi.jsmg.dev;

    location / {
        proxy_pass http://ombiserver;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_buffering off;         # for a single server setup (SSL termination of Varnish), where no caching is done in NGINX itself
        proxy_buffer_size 16k;       # should be enough for most PHP websites, or adjust as above
        proxy_busy_buffers_size 24k; # essentially, proxy_buffer_size + 2 small buffers of 4k
        proxy_buffers 64 4k;         # should be enough for most PHP websites, adjust as above to get an accurate value
    }

    # This allows access to the actual api
    location /api {
        proxy_pass http://ombiserver;
    }

    # This allows access to the documentation for the api
    location /swagger {
        proxy_pass http://ombiserver;
    }
}
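As a side note, if the nginx and ombi containers are attached to the same Docker compose network, an alternative (a sketch, not part of the original fix) is to point the upstream at the service name instead of hard-coding the host IP:

upstream ombiserver {
    # "ombi" is the compose service name, resolved by Docker's embedded DNS
    server ombi:3579;
}

This avoids breaking the proxy if the host's LAN address ever changes.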
Problem:
We've set up a Docker container running on port 3002 and mapped that port to /path/ on my domain www.example.com. An Express REST API runs in the container on port 3002 and logs req.hostname; when I make a request from, say, www.abc.com, the logged value of req.hostname is www.example.com instead of www.abc.com.
Nginx Conf
server {
    listen 443 ssl;
    ssl_certificate /etc/ssl/__abc.crt;
    ssl_certificate_key /etc/ssl/abc.key;
    listen 80 default_server;
    listen [::]:80 default_server;

    location / {
        proxy_pass http://localhost:3001/;
        proxy_set_header Host $host;
    }

    location /path/ {
        proxy_pass http://localhost:3002/;
        proxy_set_header Host $http_host;
    }
}
What changes do I have to make so that the logged value is www.abc.com?
Nginx's location blocks should be ordered such that more specific expressions come first.
In your example, you should have:
location /path/ {
    proxy_pass http://localhost:3002/;
    proxy_set_header Host $http_host;
}

location / {
    proxy_pass http://localhost:3001/;
    proxy_set_header Host $host;
}
Make sure your changes take effect by either running nginx -s reload or restarting the container.
I am trying to build a service with two parts: a backend and a frontend. Each runs in its own Docker container, and they communicate through the docker-compose configuration and an nginx container.
For HTTPS access everything works, but when I try to use WebSockets I get an upgrade error, even though the Nginx configuration includes the upgrade headers.
Error message: websocket: the client is not using the websocket protocol: 'upgrade' token not found in 'Connection' header
I am using fasthttp and fasthttp/websocket for my Golang backend. The code works on localhost (no nginx configuration), but the combination of Docker + nginx seems to break something.
The front-end uses React with a simple let socket = new WebSocket("wss://wss.mydomain.com/ws/uploadPicture/");
EDIT:
When using ctx.Request.Header.ConnectionUpgrade() just before upgrader.Upgrade, the result is true, but ctx.Response.Header.ConnectionUpgrade() is false.
Thank you!
Golang Backend
var upgrader = websocket.FastHTTPUpgrader{
    ReadBufferSize:  1024,
    WriteBufferSize: 1024,
    CheckOrigin: func(ctx *fasthttp.RequestCtx) bool {
        return true
    },
}

func InitRouter() func(*fasthttp.RequestCtx) {
    router := fasthttprouter.New()
    router.GET("/ws/uploadPicture/", doWS)
    return router.Handler
}

func doWS(ctx *fasthttp.RequestCtx) {
    err := upgrader.Upgrade(ctx, func(conn *websocket.Conn) {
        // SHOULD DO STUFF
    })
    if err != nil {
        logs.Error(err.Error()) // HIT THIS ERROR
        return
    }
}

...

fasthttp.ListenAndServe(`:8000`, InitRouter())
Nginx.conf
#############################################################################
## NGINX CONFIGURATION FOR THE WEBAPP
#############################################################################
upstream webapp {
    server webapp:3000;
    keepalive 4;
}

server {
    listen [::]:443 ssl;
    listen 443 ssl;
    server_name mydomain.com;

    ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_pass http://webapp;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}

#############################################################################
## NGINX CONFIGURATION FOR THE PROXY
#############################################################################
upstream proxy {
    server proxy:8000;
    keepalive 4;
}

server {
    listen [::]:443 ssl;
    listen 443 ssl;
    server_name api.mydomain.com;

    ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        add_header 'Access-Control-Allow-Origin' "$http_origin";
        add_header 'Access-Control-Allow-Credentials' true;
        proxy_pass http://proxy;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}

#############################################################################
## NGINX CONFIGURATION FOR THE PROXY
#############################################################################
upstream proxyws {
    server proxy:8000;
    keepalive 4;
}

server {
    listen [::]:443 ssl;
    listen 443 ssl;
    server_name wss.mydomain.com;

    ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        proxy_pass http://proxyws;
    }
}
Docker-compose
#############################################################################
## IMAGE FOR THE PROXY
#############################################################################
proxy:
  container_name: proxy
  build: ./src/Proxy
  restart: always
  env_file: .env
  ports:
    - "8000:8000"

#############################################################################
## IMAGE FOR THE WEBAPP
#############################################################################
webapp:
  container_name: webapp
  build: ./src/Webapp
  restart: always
  volumes:
    - ./src/Webapp:/home/app
    - /home/app/.next
    - /home/app/node_modules
  ports:
    - 3000:3000

#############################################################################
## IMAGE THE NGINX & CERTBOT FOR REVERSE PROXY
#############################################################################
nginx:
  image: nginx:latest
  container_name: nginx
  volumes:
    - ./data/nginx:/etc/nginx/conf.d
    - ./data/certbot/conf:/etc/letsencrypt
    - ./data/certbot/www:/var/www/certbot
  ports:
    - 80:80
    - 443:443
I'm the maintainer of the websocket package for fasthttp. The issue is fixed on the master branch.
Download it with:
go get github.com/fasthttp/websocket@master
And try again, please.
If the issue continues, please open an issue at https://github.com/fasthttp/websocket/issues
Soon, I will release a new version.
It turns out it's not an issue with Nginx/Docker, but with the Go HTTP package that is supposed to upgrade the connection.
I was using fasthttp with fasthttp/websocket, which is supposed to work, but does not in a Docker container.
I switched to httprouter with the official gorilla/websocket (rather than the fasthttp fork) and the connection upgrades successfully.
Gonna check where the issue comes from!
I believe upgrade should be a string:
proxy_set_header Connection "upgrade";
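For reference, the commonly documented nginx pattern (not from the original answer) is to derive the Connection header from the client's Upgrade header with a map, so that plain HTTP requests through the same location keep working:

# In the http {} context: send "Connection: upgrade" only when the client
# actually requested a protocol upgrade, otherwise close the connection.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    # listen / server_name / ssl directives as in the wss.mydomain.com block
    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_pass http://proxyws;
    }
}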
I'm having a particularly odd behaviour with my docker-compose and nginx setup. I am trying to have nginx proxy_pass requests to a backend web service. The web service is Spring Boot, but I don't believe that's relevant. My web service will issue a 302 redirect to users who are not authenticated, sending them to a /login page. This all seems to work as expected except for an indeterminate period of time when I bring the docker-compose up. Early requests to the service which result in 302 responses result in timeouts, whilst requests direct to the /login page return as expected immediately. After an indeterminate period of time, usually minutes, something seems to stabilise and everything works as expected. I've verified the behaviour using chrome from a client machine and curl directly on the host running the compose. I believe the 302 responses are somehow getting dropped but I'm not sure.
Can anyone spot a problem?
version: '3.5'
services:
  proxy:
    image: nginx:latest
    container_name: "proxy"
    restart: always
    volumes:
      - blah:/etc/nginx
    ports:
      - 80:80
      - 443:443
  webservice:
    image: "webservice:latest"
    container_name: "webservice"
    restart: always
    ports:
      - 8080:8080
Nginx Config:
worker_processes auto;

events { }

http {
    server {
        listen 80 default_server;
        listen 443 default_server ssl;
        ssl_certificate /etc/nginx/cert;
        ssl_certificate_key /etc/nginx/key;

        if ($scheme = http) {
            return 301 https://$host:443$request_uri;
        }

        gzip on;
        gzip_types text/plain application/xml application/json application/javascript;

        location / {
            proxy_http_version 1.1;
            proxy_pass http://webservice:8080/;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host:$server_port;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Port $server_port;
            access_log /etc/nginx/log/access.log combined;
            error_log /etc/nginx/log/error.log warn;
        }
    }
}
Does anyone see what I did wrong with my Nginx Reverse Proxy? I am getting a 502 Bad Gateway and I can't seem to figure out where my ports are wrong.
Nginx
/etc/nginx/sites-enabled/default
upstream reverse_proxy {
    server 35.237.158.31:8080;
}

server {
    listen 80;
    server_name 35.237.158.31;

    location / {
        proxy_pass http://reverse_proxy;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_cache_bypass $http_upgrade;
    }
}
/etc/nginx/sites-enabled/jesse.red [VHOST]
upstream jessered {
    server 127.0.0.1:2600; # <-- PORT 2600
}

server {
    server_name jesse.red;
    #root /var/www/jesse.red/;

    # ---------------------------------------------------------------
    # Location
    # ---------------------------------------------------------------
    location / {
        proxy_pass http://jessered;
        #proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 90;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/jesse.red/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/jesse.red/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = jesse.red) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name jesse.red;
    listen 80;
    return 404; # managed by Certbot
}
Docker
Below, it's running on port 2600:
$ docker ps
9d731afed500 wordpress:php7.0-fpm-alpine "docker-entrypoint.s…" 3 days ago Up 17 hours 9000/tcp, 0.0.0.0:2600->80/tcp jesse.red
/var/www/jesse.red/docker-compose.yml
version: '3.1'

services:
  jessered:
    container_name: jesse.red
    image: wordpress:4-fpm-alpine
    restart: always
    ports:
      - 2600:80 # <-- PORT 2600
    env_file:
      - ./config.env # Contains .gitignore params
Testing Docker
docker-compose logs
Attaching to jesse.red
jesse.red | WordPress not found in /var/www/html - copying now...
jesse.red | Complete! WordPress has been successfully copied to /var/www/html
jesse.red | [03-Jul-2018 11:15:07] NOTICE: fpm is running, pid 1
jesse.red | [03-Jul-2018 11:15:07] NOTICE: ready to handle connections
System
Below, port 2600 is in use:
$ ps aux | grep 2600
root 1885 0.0 0.1 232060 3832 ? Sl Jul02 0:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 2600 -container-ip 172.20.0.2 -container-port 80
I'm not sure what went wrong, any help is really appreciated. I have scoured many places and haven't figured it out before asking.
Nginx request processing chooses a server block like this:
Check the listen directives for exact IP:port matches; if there are no exact matches, check for IP or port matches. IP addresses with no port are treated as port 80.
From those matches it then checks the Host header of the request against the server_name directives of the matched blocks. If it finds a match, that server handles the request; if not, and no default_server directive is set, the request is passed to the server listed first in your config.
So you have server_name 35.237.158.31; on port 80, and server_name jesse.red; also on port 80.
IP addresses should be part of the listen directive, not the server_name, although this might still match for some requests. Assuming this is being accessed from the outside world, it's unlikely jesse.red will be in anyone's Host headers.
Assuming no matches, the request is going to be passed to whichever server Nginx finds first with a port match. I'm assuming Nginx includes files alphabetically, so your configs will load like this:
/etc/nginx/sites-enabled/default
/etc/nginx/sites-enabled/jesse.red
and now all your requests on port 80 with no Host match, or with the IP address in the Host field, are getting proxied to:
upstream reverse_proxy {
    server 35.237.158.31:8080;
}
That's my guess anyway, your Nginx logs will probably give you a fairly definitive answer.
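To illustrate the point about the listen directive, a rough sketch of how the default site could look (an assumption about the intended setup, not the poster's actual config):

# Hypothetical catch-all: explicitly the default, with no IP in server_name,
# so name-based vhosts like jesse.red can still match on their Host header.
server {
    listen 80 default_server;
    server_name _;

    location / {
        proxy_pass http://reverse_proxy;
    }
}

With an explicit default_server, requests whose Host header is jesse.red are matched by the jesse.red vhost, and everything else lands on the catch-all.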