MongoDB over HTTP via nginx (stream) reverse proxy - ruby-on-rails

I want to move a hosted MongoDB database service to a self-hosted solution behind a firewall. The hosted Ruby on Rails app connects via MongoMapper. I want to port only the database first, and then maybe the Rails app.
I've set up server_name data.example.com on port 80 in nginx 1.9.11 to redirect to another upstream port, localhost:8090.
This is done because the same nginx instance also serves a website on example.com:80 and www.example.com:80, but I only want data.example.com:80 to connect to the upstream TCP port. I route it this way because server_name belongs to the http section only, i.e. understandably it is not available in the stream server set-up.
So the "data-web-domain":80 to localhost:8090 config is:
server {
    listen 80;
    server_name data.example.com;

    location / {
        proxy_pass http://localhost:8090;
    }
}
I then pass localhost:8090 on to MongoDB via a stream block to allow a TCP connection.
stream {
    server {
        listen 8090;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass stream_mongo_backend;
    }

    upstream stream_mongo_backend {
        server localhost:27017;
    }
}
Browsing to data.example.com now returns this message back from MongoDB:

You are trying to access MongoDB on the native driver port. For http diagnostic access, add 1000 to the port number
OK, so the above config lets a web request at the web domain get an error message back from MongoDB.
When connecting to the server directly, via localhost or the server's hostname, on port 8090, the stream redirect to 27017 works, i.e. this mongo command line connects:
mongo server-unix-name:8090/dummydb -udummyuser -pdummysecret
But via the web domain it doesn't:
mongo data.example.com:80/dummydb -udummyuser -pdummysecret
What's broken between data.example.com:80 and localhost:8090?
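A likely explanation: the mongo shell and drivers speak MongoDB's binary wire protocol rather than HTTP, so a driver connection arriving at the http server block above is parsed as a malformed HTTP request and never reaches the stream proxy intact; only genuine HTTP requests (like the browser test) make it through. A minimal sketch of a workaround, assuming a dedicated port can be exposed for the data host (27018 below is purely illustrative), keeps the database path entirely inside the stream context:

stream {
    upstream stream_mongo_backend {
        server localhost:27017;
    }

    server {
        # raw TCP pass-through, no http block involved;
        # clients connect with: mongo data.example.com:27018/dummydb -udummyuser -pdummysecret
        listen 27018;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass stream_mongo_backend;
    }
}

Name-based routing of raw TCP (the server_name trick) is not available in the stream module, so the data traffic needs its own port or its own IP address.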

Related

Serving LetsEncrypt HTTP challenge when all http traffic is redirected to https

I want to perform the HTTP validation for LetsEncrypt, which requires plain HTTP (port 80). I have a Rails application running behind nginx, with all traffic redirected to HTTPS via the following configuration:
server {
listen 80;
listen [::]:80;
return 301 https://$host$request_uri;
}
My two questions:
Is there a dynamic way (such as an API) to add the file path to my nginx config so it serves the challenge file?
Is it possible to serve this challenge file when all traffic is being redirected to https?
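A common way to handle the second question (a sketch only, assuming certbot's webroot plugin with a hypothetical webroot of /var/www/letsencrypt) is to exempt the ACME challenge path from the redirect:

server {
    listen 80;
    listen [::]:80;

    # serve challenge files over plain HTTP
    location ^~ /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;   # directory passed to certbot --webroot -w
    }

    # everything else still goes to HTTPS
    location / {
        return 301 https://$host$request_uri;
    }
}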

Docker containers with Nginx share the same network but can't reach each other

Recently I have been trying to set up a little home server with a built-in DNS.
The DNS service is provided by lancacheDNS and is set up in combination with a Monolithic cache (port 1234) in two Docker containers on 192.168.178.11 (the host machine) in my local network.
Since I want to serve a website (port 8080) along with some independent APIs (ports 8081, 8082 or whatever), I decided to use Nginx as a reverse proxy.
The DNS does the following:
getr.me --> 192.168.178.11
The routing works completely fine and getr.me:8080 gives me my website as expected.
Now the tricky part (for me):
Set up Nginx such that:
website.getr.me --> serving website
api1.getr.me --> serving the API1
api2.getr.me --> serving the API2
For that I created a network "default_dash_nginx".
I edited the nginx compose file to connect to it via:
networks:
  default:
    name: default_dash_nginx
    external: true
Also I connected my website serving container (dashboard) to the network via --network default_dash_nginx.
The website container gets the IP 172.20.0.4 (obtained via docker inspect default_dash_nginx), and the nginx server is also connected to the network.
Nginx works and I can edit the admin page.
But unfortunately, even though I set the proxy host to the IP + port of my website received from the network, the site is not available. Here is the output of my network inspection: https://pastebin.com/jsuPZpqQ
I hope you have another idea,
thanks in advance,
Maxi
Edit:
The nginx container is actually a NginxReverseProxyManager container (I don't know if it was unclear above or simply not important).
The Nginx container can actually ping the website container and also get the HTML files from port 80 from it.
So it seems like nginx itself isn't working like it should.
The first answer (here) got no results; I tried to save it as every one of the mentioned files.
Have I missed something, or am I just not smart enough?
nginx config, try and understand
server {
listen 80;
server_name api1.getr.me;
location / {
proxy_pass http://localhost:8081;
}
}
server {
listen 80;
server_name api2.getr.me;
location / {
proxy_pass http://localhost:8082;
}
}
server {
listen 80;
server_name some.getr.me;
location / {
proxy_pass http://localhost:XXXX;
}
}
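One detail worth calling out (an assumption based on the description, since the actual Nginx Proxy Manager settings aren't shown): when nginx itself runs in a container, localhost refers to the proxy container, not to the host or the other containers, so the upstream addresses have to use the container/service names on the shared Docker network, something like:

server {
    listen 80;
    server_name api1.getr.me;
    location / {
        # "api1" is a hypothetical container name on default_dash_nginx;
        # Docker's embedded DNS resolves it to that container's IP
        proxy_pass http://api1:8081;
    }
}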

NGINX localhost upstream configuration

I am running a multi-service app orchestrated by docker-compose, and for testing purposes I want to run it on localhost (macOS).
With this NGINX configuration:
upstream fe {
server fe:3000;
}
upstream be {
server be:4000;
}
server {
server_name localhost;
listen 80;
location / {
proxy_pass http://fe;
}
location /api/ {
proxy_pass http://be;
}
}
I am able to get the FE in the browser from http://localhost/ and the BE from http://localhost/api/ as expected.
The issue is that the FE refuses to communicate with the BE, failing with this error:
Error: Network error: request to http://localhost/api/graphql failed, reason: connect ECONNREFUSED 127.0.0.1:80
(It's a Next.js FE with a Node/Express/Apollo-GraphQL BE.)
Note: I need the BE upstream because I need to download files from emails directly via URL.
Am I missing some NGINX headers, DNS configuration etc.?
Thanks in advance!
The initial call to Apollo is made from Next.js (the FE container) on the server side; that means the BE needs to be addressed over the Docker network (it cannot be localhost, because for this call localhost is the FE container itself). In my case that call goes to process.env.BE, which is set to http://be:4000.
However, for other calls (such as sending a login request from the browser) the Docker network is unknown (the call comes from localhost, which has no access to the Docker network), which means you have to address localhost/api/graphql.
I was able to achieve that with just a small change in my FE httpLink (the Apollo connection function):
uri: isBrowser ? `/api/graphql` : `${process.env.BE}/api/graphql`
NGINX config is the same as above.
NOTE: This needs to be handled only in the local environment; on a remote server it works fine without this 'hack' and the address is always domain.com/api/graphql.

docker nginx ssl proxy pass to another container

I have a docker-compose file that right now runs two containers:
version: '3'
services:
  nginx-certbot-container:
    build: nginx-certbot
    restart: always
    links:
      - ghost-container:ghost-container
    ports:
      - 80:80
      - 443:443
    tty: true
  ghost-container:
    image: ghost
    restart: always
    ports:
      - 2368:2368
I have four websites, l.com, t1.l.com, t2.l.com, t3.l.com, all with SSL certificates issued by Let's Encrypt and working, in that on each URL I can see the green lock etc.
For t2.l.com, I would like that to be a blog served by Ghost, but with the following nginx conf:
upstream ghost-container {
server ghost-container:2368;
}
server {
server_name t2.l.com;
location / {
proxy_pass https://ghost-container;
proxy_ssl_certificate /etc/letsencrypt/live/l.com/fullchain.pem;
proxy_ssl_certificate_key /etc/letsencrypt/live/l.com/privkey.pem;
proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
proxy_ssl_ciphers "ECDHE-ECD ... BC3-SHA:!DSS";
proxy_ssl_session_reuse on;
}
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/l.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/l.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
}
server {
listen 80;
listen [::]:80;
server_name t2.l.com;
include /etc/nginx/snippets/letsencrypt.conf;
location / {
return 301 https://t2.l.com$request_uri;
#proxy_pass http://ghost-container;
}
}
If I comment out the return 301 and just keep the proxy_pass, I get through to the Ghost blog no problem, except it's not via SSL. But if I comment out the proxy_pass, as above, and keep the return 301, the server returns a 502 Bad Gateway.
Is there something I'm missing? From other people's code it seems just having the proxy certs is enough...
Edit
Well, I just did something I was sure would not work and set the proxy_pass in the SSL part to http: instead of https:, and it all worked fine. If anyone can explain the mechanics or logic behind why this is so I would be very interested; it doesn't make sense in my mind.
You have to distinguish between the connection from a client to nginx (your reverse proxy here) and the connection from nginx to your ghost container.
The connection from a client to the nginx server can be encrypted (https, port 443) or unencrypted (http, 80). In your config file, there is one server block for each. If the client connects via https (after a redirect or directly), nginx will use the key at /etc/letsencrypt/live/l.com/* to encrypt the content of this connection. The content could be served from the file system inside the nginx-certbot-container container or from an upstream server (thus reverse proxy).
For t2.l.com you would like to use the upstream server. Nginx will open a connection to the upstream server. Whether it expects an http or https connection on port 2368 depends on the server running inside ghost-container. From the information you provided I deduce that it accepts http connections; otherwise you would need SSL certificates for the ghost container as well, or self-signed certificates plus configuration to make nginx trust the self-signed upstream connection. This means your proxy_pass should use http. Since the packets of this connection never leave your computer, I think it is fairly safe to use http for the upstream server in this case.
(If this is not what you intended, you can also create the SSL endpoint in the ghost-container. In that case nginx has to use SNI to determine the destination host, because it only sees encrypted packets. Search for nginx reverse proxy ssl or similar.)
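Under that reading, a trimmed sketch of the https server block (same certificate paths as above, with the proxy_ssl_* directives dropped because the upstream connection is plain HTTP) might look like:

upstream ghost-container {
    server ghost-container:2368;
}

server {
    listen 443 ssl;
    server_name t2.l.com;

    ssl_certificate     /etc/letsencrypt/live/l.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/l.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;

    location / {
        # TLS terminates at nginx; traffic to the Ghost container stays
        # plain HTTP inside the Docker network
        proxy_pass http://ghost-container;
    }
}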
Note: Please be careful with the ports property. The above docker-compose file publishes port 2368. So the ghost server can be reached via http://t2.l.com:2368. To avoid this, replace it with expose: [2368].

Deploying a Production Rails app on an Intranet/LAN

My usual deploy setup consists of Ubuntu/PostgreSQL/nginx/Unicorn running on a VPS. I need to set up an app that will only be run on an intranet/LAN (on Ubuntu).
Having never done this before, what are the differences from a usual VPS deployment?
Do I only need to change the server_name in my nginx.conf from:
server {
listen 80;
server_name www.example.com ;
root /home/deployer/example/current/public;
}
to server_name localhost;?
The server name will only be useful if your internal DNS server is configured to point at your web server's IP address. That way other computers on the LAN can find it by its name. If you don't want to get into configuring an internal DNS, just set the server_name to your web server's IP address and other computers can use the IP to connect to it.
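For example (a sketch only, with 192.168.1.50 standing in for whatever LAN IP the web server actually has):

server {
    listen 80;
    # hypothetical LAN address; other machines browse to http://192.168.1.50/
    server_name 192.168.1.50;
    root /home/deployer/example/current/public;
}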
Here's an article about setting nginx in a LAN: http://zaiste.net/2013/03/serving_apps_locally_with_nginx_and_pretty_domains/
Here's an answer about internal DNS: https://superuser.com/questions/45789/running-dns-locally-for-home-network
