Reverse nginx works with one config/location but not with the second - docker

This is my setup:
Raspberry Pi as a (home) server
port forwarding of 443 from the router to my Pi
nginx reverse proxy on the Pi listening on 443, forwarding requests to my services (Docker applications)
using No-IP as a DDNS name for my public IP, so I can access my Pi/reverse proxy via the DDNS name
I have 3 Docker images: nginx (reverse proxy), ownCloud and Nexus.
I want to access my ownCloud over the internet via ddns-name/owncloud and my Nexus via ddns-name/nexus.
I have two *.conf files for my reverse proxy, one for ownCloud and one for Nexus.
See the following files:
ownCloud config:
upstream owncloud {
    server 192.168.0.155:8080;
}

server {
    listen 443 ssl http2;
    include /etc/nginx/ssl.conf;

    location /owncloud {
        proxy_pass http://owncloud;
        include common_location.conf;
    }
}
Nexus config:
upstream nexus {
    server 192.168.0.155:8081;
}

server {
    listen 443 ssl http2;
    include /etc/nginx/ssl.conf;

    location /nexus {
        proxy_pass http://nexus;
        include common_location.conf;
    }
}
If I only activate the ownCloud config without the Nexus config, everything works correctly: I can access my ownCloud on the Pi over the internet at ddns-name/owncloud via HTTPS.
If both configs are active (ownCloud and Nexus), only Nexus works via ddns-name/nexus. If I try to access ddns-name/owncloud, I get a 404 Not Found.
I have no idea why this is not working. I'm not very familiar with nginx and have tried many tutorials, but nothing helps.
I think I missed something :)
Hope anyone can help me.
Thanks
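For what it's worth, one detail stands out in the two files above: both server blocks listen on 443 without a server_name, so nginx can only pick one of them as the default server for that port. A sketch of a merged config (a possible direction to try, not a verified fix) with both locations in a single server block:

```nginx
upstream owncloud {
    server 192.168.0.155:8080;
}

upstream nexus {
    server 192.168.0.155:8081;
}

server {
    listen 443 ssl http2;
    include /etc/nginx/ssl.conf;

    # both paths are handled by the same server block, so neither
    # config can shadow the other as the default server for port 443
    location /owncloud {
        proxy_pass http://owncloud;
        include common_location.conf;
    }

    location /nexus {
        proxy_pass http://nexus;
        include common_location.conf;
    }
}
```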

Related

Docker containers with nginx share the same network but can't reach each other

Recently I have been trying to set up a little home server with a built-in DNS.
The DNS service is provided by lancacheDNS, set up in combination with a Monolithic cache (port 1234) in two Docker containers on 192.168.178.11 (the host machine) in my local network.
Since I want to serve a website (port 8080) along with some independent APIs (ports 8081, 8082 or whatsoever), I decided to use nginx as a reverse proxy.
The DNS does the following:
getr.me --> 192.168.178.11
The routing works completely fine and getr.me:8080 gives me my website as expected.
Now the tricky part (for me):
Set up nginx such that:
website.getr.me --> serves the website
api1.getr.me --> serves API1
api2.getr.me --> serves API2
For that I created a network "default_dash_nginx".
I edited the nginx compose file to connect to it via:
networks:
  default:
    name: default_dash_nginx
    external: true
I also connected my website-serving container (dashboard) to the network via --network default_dash_nginx.
The website container gets the IP 172.20.0.4 (retrieved via docker inspect default_dash_nginx), and the nginx server is connected to the same network.
Nginx works and I can edit the admin page.
But unfortunately, even though I set the proxy host to the IP + port of my website received from the network, the site is not available. Here is the output of my network inspection: https://pastebin.com/jsuPZpqQ
I hope you have another idea.
Thanks in advance,
Maxi
Edit:
The nginx container is actually a NginxReverseProxyManager container (I don't know if that was unclear above or simply not important).
The nginx container can actually ping the website container and also fetch the HTML files from port 80 on it.
So it seems like nginx itself isn't working like it should.
The first answer got no results (I tried to save it as each of the mentioned files
here).
Have I missed something, or am I just not smart enough?
My nginx config (my attempt):
server {
    listen 80;
    server_name api1.getr.me;

    location / {
        proxy_pass http://localhost:8081;
    }
}

server {
    listen 80;
    server_name api2.getr.me;

    location / {
        proxy_pass http://localhost:8082;
    }
}

server {
    listen 80;
    server_name some.getr.me;

    location / {
        proxy_pass http://localhost:XXXX;
    }
}
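One thing worth noting about the config above: inside the nginx container, localhost is the nginx container itself, not the host, so proxy_pass http://localhost:8081 cannot reach the APIs. A sketch of one server block using a container name instead (the name api1 is an assumption), which Docker's embedded DNS resolves on the shared default_dash_nginx network:

```nginx
server {
    listen 80;
    server_name api1.getr.me;

    location / {
        # the container name resolves via Docker's embedded DNS
        # on the shared default_dash_nginx network
        proxy_pass http://api1:8081;
    }
}
```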

NGINX localhost upstream configuration

I am running a multi-service app orchestrated by docker-compose, and for testing purposes I want to run it on localhost (macOS).
With this NGINX configuration:
upstream fe {
    server fe:3000;
}

upstream be {
    server be:4000;
}

server {
    server_name localhost;
    listen 80;

    location / {
        proxy_pass http://fe;
    }

    location /api/ {
        proxy_pass http://be;
    }
}
I am able to get the FE in the browser from http://localhost/ and the BE from http://localhost/api/ as expected.
The issue is that the FE refuses to communicate with the BE, with this error:
Error: Network error: request to http://localhost/api/graphql failed, reason: connect ECONNREFUSED 127.0.0.1:80
(It's a Next.js FE with a Node/Express/Apollo-GraphQL BE.)
Note: I need the BE upstream because I need to download files from email directly via URL.
Am I missing some NGINX headers, DNS configuration, etc.?
Thanks in advance!
The initial call to Apollo is made from Next.js (the FE container) server-side, which means the BE needs to be addressed over the Docker network (it cannot be localhost, because for this call localhost is the FE container itself). In my case that call goes to process.env.BE, which is set to http://be:4000.
However, for other calls (e.g. sending a login request from the browser) the Docker network is unknown (the call comes from localhost, which has no access to the Docker network), which means you have to address localhost/api/graphql.
I was able to achieve that functionality with just a small change in my FE httpLink (the Apollo connecting function):
uri: isBrowser ? `/api/graphql` : `${process.env.BE}/api/graphql`
The NGINX config is the same as above.
NOTE: This only needs to be handled in the local environment; on the remote server it works fine without this 'hack' and the address is always domain.com/api/graphql.
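The uri line above can be expressed as a small helper; a sketch in TypeScript, where the function name graphqlUri is illustrative and beUrl stands in for process.env.BE:

```typescript
// Sketch of the Apollo httpLink URI selection described above.
// `beUrl` stands in for process.env.BE (e.g. "http://be:4000").
function graphqlUri(isBrowser: boolean, beUrl: string): string {
  // Browser requests go through the nginx proxy via a relative path;
  // server-side rendering inside the FE container must use the Docker
  // network hostname, since "localhost" there is the FE container itself.
  return isBrowser ? `/api/graphql` : `${beUrl}/api/graphql`;
}
```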

Dockerized nginx cannot resolve DNS name

I'm trying to configure nginx to work as a reverse proxy for the ProGet application. Everything works fine if I use the IP in the browser. Unfortunately, for some reason it doesn't work with a domain name like example.com. I host the applications on a DigitalOcean droplet, where I also have DNS configured.
Nginx configuration below:
upstream proget {
    server proget;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://proget;
    }
}
I created the other containers according to the documentation: https://docs.inedo.com/docs/proget/installation/installation-guide/linux-docker
I ran into a similar problem in a k8s cluster before, and I fixed it by adding a resolver directive to my nginx config.
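A minimal sketch of what that can look like in this setup, assuming the containers share a user-defined Docker network (where Docker's embedded DNS listens on 127.0.0.11); the variable indirection makes nginx resolve the name at request time rather than only at startup:

```nginx
server {
    listen 80;
    server_name example.com;

    # Docker's embedded DNS server on user-defined networks
    resolver 127.0.0.11 valid=30s;

    location / {
        # using a variable forces nginx to resolve "proget" per request,
        # so startup no longer fails if the container is not up yet
        set $upstream_proget http://proget:80;
        proxy_pass $upstream_proget;
    }
}
```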

Docker swarm reverse proxy+load balancing with Nginx

I have a docker compose file with 2 services: joomla and phpmyadmin.
I need a reverse proxy which behaves like below:
path: / --> joomla service
path: /managedb --> phpmyadmin service
server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://joomla;
    }

    location /managedb {
        proxy_pass http://phpmyadmin;
    }
}
Everything works well; however, I need to add load balancing to distribute work between my 3 machines in the Docker swarm.
They are all VMs on the same LAN with static IPs 192.168.75.11/12/13.
The nginx way to add load balancing would be the following:
upstream joomla_app {
    server 192.168.75.11;
    server 192.168.75.12;
    server 192.168.75.13;
}

upstream phpmyadmin_app {
    server 192.168.75.11;
    server 192.168.75.12;
    server 192.168.75.13;
}

server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://joomla_app;
    }

    location /managedb {
        proxy_pass http://phpmyadmin_app;
    }
}
However, since the only exposed port is nginx's port 80 (because I need it as a reverse proxy too), the code above obviously doesn't work.
So how can I add load balancing in this scenario?
Thank you in advance!
In Docker swarm you don't need your own load balancer; it has a built-in one. Simply scale your services and that's all. The swarm name resolver will resolve joomla and phpmyadmin either to a virtual IP, which acts as the swarm load balancer for that service, or, if you configure the service to work in dnsrr mode, via DNS round-robin when resolving the service name to container IPs.
However, if you want to distribute services across nodes in the swarm, that's a different thing. In that case you can set placement constraints for each service or make them "global" instead of replicated - see https://docs.docker.com/engine/swarm/services/#control-service-placement
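A sketch of what that can look like in a stack file (service names and the stack layout are assumptions; endpoint_mode: dnsrr is only needed if you prefer DNS round-robin over the default virtual-IP load balancing):

```yaml
# deploy with: docker stack deploy -c docker-compose.yml mystack
services:
  joomla:
    image: joomla
    deploy:
      replicas: 3            # swarm's built-in load balancer spreads requests
  phpmyadmin:
    image: phpmyadmin
    deploy:
      replicas: 3
      endpoint_mode: dnsrr   # optional: DNS round-robin instead of a virtual IP
  proxy:
    image: nginx
    ports:
      - "80:80"              # only nginx is published; it reaches the services
                             # by name over the swarm overlay network
```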

Two docker app containers behind one nginx

I'm trying to serve 2 web applications that should be powered by HHVM. It is easy to build one Docker image that includes nginx and the default.conf. But now that I will get n apps as microservices, I want to test them and share the nginx container as I proceed with others like a DB, e.g.
So when nginx is accessed externally with HHVM, do I have to provide HHVM on this image too? Or can I point it to the Debian host where HHVM is already provided? Then I could store the nginx.conf with something like this:
upstream api.local.io {
    server 127.0.0.1:3000;
}

upstream booking.local.io {
    server 127.0.0.1:5000;
}
How can I set up a proper nginx container for this?
Yeah, you can create another nginx container with an nginx.conf that is configured similarly to this:
upstream api {
    # assuming this nginx container can reach 127.0.0.1:3000
    server 127.0.0.1:3000;
    server server2.local.io:3000;
}

upstream booking {
    # assuming this nginx container can reach 127.0.0.1:5000
    server 127.0.0.1:5000;
    server server2.local.io:5000;
}

server {
    server_name api.local.io;

    location / {
        proxy_pass http://api;
    }
}

server {
    server_name booking.local.io;

    location / {
        proxy_pass http://booking;
    }
}
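A sketch of running such a proxy container, assuming the config above is saved as nginx.conf next to the compose file (image and paths are illustrative):

```yaml
services:
  proxy:
    image: nginx
    ports:
      - "80:80"
    volumes:
      # mount the config above as the default server config (read-only)
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
```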
