docker nginx ssl proxy pass to another container

I have a docker-compose file that right now runs two containers:
version: '3'
services:
  nginx-certbot-container:
    build: nginx-certbot
    restart: always
    links:
      - ghost-container:ghost-container
    ports:
      - 80:80
      - 443:443
    tty: true
  ghost-container:
    image: ghost
    restart: always
    ports:
      - 2368:2368
I have four websites, l.com, t1.l.com, t2.l.com and t3.l.com, all with SSL certificates issued by Let's Encrypt, and all working in the sense that when I visit them I see the green lock in the browser, etc.
For t2.l.com I would like to serve a Ghost blog, using the following nginx conf:
upstream ghost-container {
    server ghost-container:2368;
}

server {
    server_name t2.l.com;

    location / {
        proxy_pass https://ghost-container;
        proxy_ssl_certificate /etc/letsencrypt/live/l.com/fullchain.pem;
        proxy_ssl_certificate_key /etc/letsencrypt/live/l.com/privkey.pem;
        proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        proxy_ssl_ciphers "ECDHE-ECD ... BC3-SHA:!DSS";
        proxy_ssl_session_reuse on;
    }

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/l.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/l.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
}
server {
    listen 80;
    listen [::]:80;
    server_name t2.l.com;
    include /etc/nginx/snippets/letsencrypt.conf;

    location / {
        return 301 https://t2.l.com$request_uri;
        #proxy_pass http://ghost-container;
    }
}
If I comment out the return 301 and just keep the proxy_pass, I reach the Ghost blog no problem, except it's not via SSL. But if I comment out the proxy_pass and keep the return 301, as above, the server returns a 502 Bad Gateway.
Is there something I'm missing? From other people's code it seems that just having the proxy certificates is enough...
Edit
Well, I just did something that I was sure would not work: I set the proxy_pass in the SSL server block to http: instead of https:, and it all worked fine. If anyone can explain the mechanics or logic behind why this is so, I would be very interested; it doesn't make sense in my mind.

You have to distinguish between the connection from a client to nginx (your reverse proxy here) and the connection from nginx to your ghost container.
The connection from a client to the nginx server can be encrypted (https, port 443) or unencrypted (http, port 80). In your config file, there is one server block for each. If the client connects via https (after a redirect or directly), nginx will use the key at /etc/letsencrypt/live/l.com/* to encrypt the content of this connection. The content could be served from the file system inside the nginx-certbot-container container or from an upstream server (hence the reverse proxy).
For t2.l.com you would like to use the upstream server. Nginx will open a connection to the upstream server. Whether it expects an http or https connection on port 2368 depends on the server running inside ghost-container. From the information you provided I deduce that it accepts http connections; otherwise you would also need SSL certificates for the ghost container, or you would have to create self-signed certificates and make nginx trust the self-signed upstream connection. This means your proxy_pass should use http. Since the packets of this connection never leave your machine, I think it is fairly safe to use http for the upstream server in this case.
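As a minimal sketch, based on the config in the question, with the proxy_ssl_* directives dropped because the upstream connection is plain http (the two proxy_set_header lines are optional additions that are not in your original config):

upstream ghost-container {
    server ghost-container:2368;
}

server {
    listen 443 ssl;
    server_name t2.l.com;

    ssl_certificate /etc/letsencrypt/live/l.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/l.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;

    location / {
        # TLS terminates here; the connection to the upstream stays inside
        # the Docker network, so plain http to the ghost container is fine.
        proxy_pass http://ghost-container;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}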
(If this is not what you intended, you can also create the SSL endpoint in the ghost-container. In this case, nginx has to use SNI to determine the destination host because it only sees encrypted packets. Search for nginx reverse proxy ssl or similar.)
Note: Please be careful with the ports property. The above docker-compose file publishes port 2368. So the ghost server can be reached via http://t2.l.com:2368. To avoid this, replace it with expose: [2368].
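For example, a sketch of the ghost-container service with the port only exposed to other containers instead of being published on the host:

ghost-container:
  image: ghost
  restart: always
  expose:
    - 2368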

Related

Docker containers with Nginx share the same network but can't reach each other

Recently I have been trying to set up a little home server with a built-in DNS.
The DNS service is provided by lancacheDNS, set up in combination with a Monolithic cache (port 1234) in two docker containers on 192.168.178.11 (the host machine) in my local network.
Since I want to serve a website (port 8080) along with some independent APIs (ports 8081, 8082 or whatever) I decided to use Nginx as a reverse proxy.
The DNS does the following:
getr.me --> 192.168.178.11
The routing works completely fine and getr.me:8080 gives me my website as expected.
Now the tricky part (for me):
Set up Nginx such that:
website.getr.me --> serving website
api1.getr.me --> serving the API1
api2.getr.me --> serving the API2
For that I created a network "default_dash_nginx".
I edited the nginx compose config to connect to that network via:
networks:
  default:
    name: default_dash_nginx
    external: true
Also I connected my website serving container (dashboard) to the network via --network default_dash_nginx.
The website container gets the IP 172.20.0.4 (obtained via docker inspect default_dash_nginx), and the nginx server is also connected to the network.
Nginx works and I can edit the admin page.
But unfortunately, even though I set the proxy host to the IP + port of my website received from the network, the site is not available. Here is the output of my network inspection: https://pastebin.com/jsuPZpqQ
I hope you have another Idea,
thanks in advance,
Maxi
Edit:
The nginx container is actually a NginxReverseProxyManager container (I don't know if that was unclear above or simply not important).
The Nginx container can actually ping the website container and also fetch the HTML files from port 80 on it.
So it seems like nginx itself isn't working like it should.
The first answer got no results (I tried saving it as each of the files mentioned here).
Have I missed something, or am I just not smart enough?
nginx config, try and understand
server {
    listen 80;
    server_name api1.getr.me;

    location / {
        proxy_pass http://localhost:8081;
    }
}

server {
    listen 80;
    server_name api2.getr.me;

    location / {
        proxy_pass http://localhost:8082;
    }
}

server {
    listen 80;
    server_name some.getr.me;

    location / {
        proxy_pass http://localhost:XXXX;
    }
}
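One thing worth noting (a sketch, not a verified fix): inside the Nginx container, localhost refers to the Nginx container itself, not to the other services. Assuming the API containers are attached to the shared default_dash_nginx network under hypothetical service names such as api1, the proxy could target those names instead:

server {
    listen 80;
    server_name api1.getr.me;

    location / {
        # "api1" is a hypothetical compose service name on the shared
        # default_dash_nginx network; Docker's embedded DNS resolves it to
        # the container's address, so no hard-coded IP is needed.
        proxy_pass http://api1:8081;
    }
}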

nginx: http to https not working on chrome

I've got this nginx configuration to redirect http to https:
# http redirects to https
server {
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    ...
}
It works properly in Firefox. If I add an /etc/hosts entry like
127.0.0.1 my-custom-domain.com
to make sure I have a domain that was never used before, then entering my-custom-domain.com in Firefox works as expected: it redirects to https.
But if I do the same in Chrome, it does not. Chrome only opens the https version if I explicitly enter https://my-custom-domain.com. Not sure why it behaves differently in Chrome.
P.S. I have read some people say that server_name must not be _ and should have a specific name, but it behaves the same even if I set server_name my-custom-domain.com;
P.P.S. I'm using the nginx:1.23.0-alpine docker image.
Update
It looks like this issue is related to the nginx docker image. I was not able to reproduce it with nginx installed locally, though the images nginx:1.18.0, nginx:1.23.0 and nginx:1.23.0-alpine all had the same issue.

nginx responds to HTTPS but not HTTP

I am using the dockerized Nextcloud as shown here: https://github.com/nextcloud/docker/tree/master/.examples/docker-compose/with-nginx-proxy-self-signed-ssl/mariadb/fpm
I set this up with port 80 mapped to 12345 and port 443 mapped to 12346. When I go to https://mycloud.example.com:12346, I get the self-signed certificate prompt, but otherwise everything is fine and I see the Nextcloud web UI. But when I go to http://mycloud.example.com:12345, nginx (the proxy container) gives the error "503 Service Temporarily Unavailable". The error also shows up in the proxy's logs.
How can I diagnose the issue? Why is HTTPS working but not HTTP?
Can you provide your docker command starting Nextcloud, or your docker-compose file?
Diagnosis is as usual with docker stuff: get the id of the currently running container
docker ps
Then check the logs
docker logs [id or name of your container]
docker-compose logs [name of your service]
Connect into the container
docker exec -ti [id or name of your container] [bash or ash if alpine based container]
There, read the nginx conf files involved. In your case I'd check the redirect being made from http to https; most likely it's something like the config below, with no specific port specified for https, hence port 443, hence not working:
server {
    listen 80;
    server_name my.domain.com;
    return 301 https://$server_name$request_uri; # <======== no port = 443
}

server {
    listen 443 ssl;
    server_name my.domain.com;
    # add Strict-Transport-Security to prevent man in the middle attacks
    add_header Strict-Transport-Security "max-age=31536000" always;
    [....]
}
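As a sketch for the setup described in the question (assuming the host mapping of 12346 to the container's port 443), the redirect would need the published port spelled out:

server {
    listen 80;
    server_name mycloud.example.com;
    # Redirect to the externally published HTTPS port; without the
    # explicit :12346 the browser would try the default port 443,
    # which is not published on the host in this setup.
    return 301 https://$server_name:12346$request_uri;
}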

MongoDB over http via nginx (stream) reverse proxy

I want to move a hosted Mongo database service to a self-hosted solution behind a firewall. The hosted Ruby on Rails app connects via MongoMapper. I want to move only the database first, and then maybe the Ruby on Rails app.
I've set up server_name data.example.com on port 80 in nginx 1.9.11 to redirect to another upstream port, localhost:8090.
This is done because I also have this nginx serving a website on example.com:80 and www.example.com:80, but I only want data.example.com:80 to connect to the upstream TCP port. I do it this way because server_name belongs to the http section only, i.e. it is understandably not available in the stream server set-up.
So the "data-web-domain":80 to localhost:8090 config:
server {
    listen 80;
    server_name data.example.com;

    location / {
        proxy_pass http://localhost:8090;
    }
}
I then pass localhost:8090 to MongoDB via a stream block to allow a TCP connection:
stream {
    server {
        listen 8090;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass stream_mongo_backend;
    }

    upstream stream_mongo_backend {
        server localhost:27017;
    }
}
When I browse to data.example.com I get:

You are trying to access MongoDB on the native driver port. For http diagnostic access, add 1000 to the port number

OK, so the above config allows a web request to the web domain to get an error message back from MongoDB.
When connecting to the server directly, via localhost or the server's unix hostname on port 8090, the stream redirect to 27017 works, i.e. this mongo command line connects:
mongo server-unix-name:8090/dummydb -udummyuser -pdummysecret
But via the web domain it doesn't:
mongo data.example.com:80/dummydb -udummyuser -pdummysecret
What's broken between data.example.com:80 and localhost:8090?

How to dockerize two applications talking to each other via an HTTP server?

TL;DR
How can we set up a docker-compose environment so we can reach a container under multiple, custom-defined aliases? (Or any alternative that solves our problem in another fashion.)
Existing setup
We have two applications† (nodejs servers), each behind an HTTP reverse proxy (Nginx), that need to talk to each other. On localhost, configuring this is easy:
Add /etc/hosts entries for ServerA and ServerB:
127.0.0.1 server-a.testing
127.0.0.1 server-b.testing
Run ServerA on port e.g. 2001 and ServerB on port 2002
Configure two virtual hosts, reverse proxying to ServerA and ServerB:
server { # Forward all traffic for server-a.testing to localhost:2001
    listen 80;
    server_name server-a.testing;

    location / {
        proxy_pass http://localhost:2001;
    }
}

server { # Forward all traffic for server-b.testing to localhost:2002
    listen 80;
    server_name server-b.testing;

    location / {
        proxy_pass http://localhost:2002;
    }
}
This setup is great for testing: Both applications can communicate with each other in a way that is very close to the production environment, e.g. request('https://server-b.testing', fn); and we can test how the HTTP server configuration interacts with our apps (e.g. TLS config, CORS headers, HTTP2 proxying).
Dockerize all the things!
We now want to move this setup to docker and docker-compose. The docker-compose.yaml that would work in theory is this:
nginx:
  build: nginx
  ports:
    - "80:80"
  links:
    - server-a
    - server-b
server-a:
  build: serverA
  ports:
    - "2001:2001"
  links:
    - nginx:server-b.testing
server-b:
  build: serverB
  ports:
    - "2002:2002"
  links:
    - nginx:server-a.testing
So when ServerA addresses http://server-b.testing it actually reaches Nginx, which reverse-proxies it to ServerB. Unfortunately, circular dependencies are not possible with links. There are three typical solutions to this problem:
use ambassadors
use nameservers
use the brand new networking (--x-networking).
None of these work for us because, for the virtual hosting to work, we need to be able to address the Nginx container under the names server-a.testing and server-b.testing. What can we do?
(†) Actually it's a little bit more complicated – four components and links – but that shouldn't make any difference to the solution:
testClient (-> Nginx) -> ServerA,
testClient (-> Nginx) -> ServerB,
ServerA (-> Nginx) -> ServerB,
testClient (-> Nginx) -> ServerC.
Try this:
Link your server-a and server-b containers to nginx with --link server-a:server-a --link server-b:server-b
Update the nginx conf file with:
location /sa {
    proxy_pass http://server-a:2001;
}
location /sb {
    proxy_pass http://server-b:2002;
}
When you link two containers, docker adds a "container_name container_ip" entry to the /etc/hosts file of the linking container. So, in this case, server-a and server-b are resolved to their respective container IPs via the /etc/hosts file.
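For illustration only, the resulting entries inside the nginx container could look something like this (the addresses here are made up):

# /etc/hosts inside the nginx container (illustrative addresses)
172.17.0.2    server-a
172.17.0.3    server-b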
And you can access them at http://localhost/sa or http://localhost/sb.
