Two docker app containers on one nginx - docker

I'm trying to serve two web applications that should both be powered by HHVM. It is easy to build a single Docker image that includes nginx and the default.conf. But now that I will have n apps as microservices, I want to test them and share the nginx container as I proceed with others, such as a DB.
So when nginx is accessed externally with HHVM, do I have to provide HHVM on this image too? Or can I point it at the Debian container where HHVM is already provided? Then I could store the nginx.conf with something like this:
upstream api.local.io {
    server 127.0.0.1:3000;
}
upstream booking.local.io {
    server 127.0.0.1:5000;
}
How can I set up a proper nginx container for this?

Yeah, you can create another nginx container with an nginx.conf that is configured similarly to this:
upstream api {
    # Assuming this nginx container can access 127.0.0.1:3000
    server 127.0.0.1:3000;
    server server2.local.io:3000;
}
upstream booking {
    # Assuming this nginx container can access 127.0.0.1:5000
    server 127.0.0.1:5000;
    server server2.local.io:5000;
}
server {
    listen 80;
    server_name api.local.io;
    location / {
        proxy_pass http://api;
    }
}
server {
    listen 80;
    server_name booking.local.io;
    location / {
        proxy_pass http://booking;
    }
}
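Note that if the two apps run in their own containers, 127.0.0.1 inside the nginx container is nginx itself, so the upstreams above only work when the apps share the host's network. If the containers instead share a Docker network, you can keep HHVM inside the app containers and point the upstreams at the container names. A minimal sketch, assuming the app containers are named api and booking (hypothetical names) and listen on ports 3000 and 5000:
upstream api {
    # "api" is the app container's name/alias on the shared Docker network
    server api:3000;
}
upstream booking {
    server booking:5000;
}
server {
    listen 80;
    server_name api.local.io;
    location / {
        proxy_pass http://api;
    }
}
server {
    listen 80;
    server_name booking.local.io;
    location / {
        proxy_pass http://booking;
    }
}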

Related

Docker containers with Nginx share the same network but can't reach each other

Recently I've been trying to set up a little home server with a built-in DNS.
The DNS service is provided by lancacheDNS and set up in combination with a Monolithic cache (port 1234) in two Docker containers on 192.168.178.11 (the host machine) in my local network.
Since I want to serve a website (port 8080) along with some independent APIs (ports 8081, 8082 or whatever), I decided to use Nginx as a reverse proxy.
The DNS does the following:
getr.me --> 192.168.178.11
The routing works completely fine and getr.me:8080 gives me my website as expected.
Now the tricky part (for me):
Set up Nginx such that:
website.getr.me --> serving the website
api1.getr.me --> serving API1
api2.getr.me --> serving API2
For that I created a network "default_dash_nginx".
I edited the nginx compose file to connect to it via:
networks:
  default:
    name: default_dash_nginx
    external: true
I also connected my website-serving container (dashboard) to the network via --network default_dash_nginx.
The website container gets the IP 172.20.0.4 (found via docker inspect default_dash_nginx), and the nginx server is connected to the network as well.
Nginx works and I can access the admin page.
But unfortunately, even though I set the proxy host to the IP + port of my website as reported by the network, the site is not available. Here is the output of my network inspection: https://pastebin.com/jsuPZpqQ
I hope you have another idea,
thanks in advance,
Maxi
Edit:
The nginx container is actually a Nginx Proxy Manager container (I don't know if that was unclear above or simply not important).
The Nginx container can actually ping the website container and also fetch the HTML files from port 80 on it.
So it seems like nginx itself isn't working as it should.
The first answer got no results (I tried saving it as each of the mentioned files here).
Have I missed something, or am I just not smart enough?
Nginx config, try this:
server {
    listen 80;
    server_name api1.getr.me;
    location / {
        proxy_pass http://localhost:8081;
    }
}
server {
    listen 80;
    server_name api2.getr.me;
    location / {
        proxy_pass http://localhost:8082;
    }
}
server {
    listen 80;
    server_name some.getr.me;
    location / {
        proxy_pass http://localhost:XXXX;
    }
}
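Keep in mind that if this config runs inside the proxy container, localhost there refers to the proxy container itself, not to the host or to the other containers. Since the website container is on the shared default_dash_nginx network, a variant that targets it by name may be closer to what's needed; a sketch, assuming the website container is reachable as dashboard on port 80 (name and port are assumptions):
server {
    listen 80;
    server_name website.getr.me;
    location / {
        # "dashboard" resolves via Docker's embedded DNS on the shared network
        proxy_pass http://dashboard:80;
    }
}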

Set up nginx not to crash if a host in an upstream group is not found

I have read question 32845674 (and many others) and the answers to it, but I don't understand how to apply the solution shown there to my case:
In my nginx configuration, upstreams are used specifically for grouping servers (multiple Docker containers that differ only in the number at the end, e.g. portal_frontend_dev_frontend_1, portal_frontend_dev_frontend_2, etc.).
Also, nginx (as another Docker container) serves as a balancer on one physical server between three environments - dev, staging and production - each of which is represented by several containers of the same name and accepts requests through nginx.
Of course, I don't want to find that nginx crashes and access to all environments (including production) disappears because, for example, the frontend_dev containers are down when I restart it (e.g. to update the config).
So how do I need to use variables with upstreams?
P.S. Simply adding a 127.0.0.11 resolver doesn't help - if any of the upstream containers is not running, nginx won't start, failing with the error
[emerg] host not found in upstream "portal_frontend_dev_frontend_1:8080" in /etc/nginx/conf.d/staging.conf:18
Additionally: don't be confused by the fact that dev appears in the staging config - the separation between dev and staging happens at the locations, because I don't have a third domain name yet. This fact is fixed and doesn't depend on me.
That's my /etc/nginx/conf.d/staging.conf:
upstream portal_frontend_stage {
    server portal_frontend_stage_frontend_1:8080 max_conns=1 max_fails=0;
    server portal_frontend_stage_frontend_2:8080 max_conns=1 max_fails=0;
}
upstream portal_frontend_dev {
    server portal_frontend_dev_frontend_1:8080 max_conns=1 max_fails=0;
    server portal_frontend_dev_frontend_2:8080 max_conns=1 max_fails=0;
}
upstream portal_backend_dev {
    server portal_backend_dev_backend_1:80 max_conns=1 max_fails=0;
    server portal_backend_dev_backend_2:80 max_conns=1 max_fails=0;
}
upstream portal_backend_stage {
    server portal_backend_stage_backend_1:80 max_conns=1 max_fails=0;
    server portal_backend_stage_backend_2:80 max_conns=1 max_fails=0;
}
server {
    listen 80 default_server;
    server_name "example.staging.com";
    resolver 127.0.0.11;
    include conf.d/logs.inc;
    include conf.d/cloudflare.inc;
    include conf.d/proxy.inc;
    include conf.d/develop.inc;
    include conf.d/stage.inc;
    include conf.d/google-analytics.inc;
}

NGINX localhost upstream configuration

I am running a multi-service app orchestrated by docker-compose, and for testing purposes I want to run it on localhost (macOS).
With this NGINX configuration:
upstream fe {
    server fe:3000;
}
upstream be {
    server be:4000;
}
server {
    server_name localhost;
    listen 80;
    location / {
        proxy_pass http://fe;
    }
    location /api/ {
        proxy_pass http://be;
    }
}
I am able to get the FE in the browser from http://localhost/ and the BE from http://localhost/api/ as expected.
The issue is that the FE refuses to communicate with the BE, with this error:
Error: Network error: request to http://localhost/api/graphql failed, reason: connect ECONNREFUSED 127.0.0.1:80
(It's a NEXT.JS FE with a NODE/EXPRESS/APOLLO-GQL BE)
Note: I need to upstream the BE, because I need to download files from email directly with a URL.
Am I missing some NGINX headers, DNS configuration, etc.?
Thanks in advance!
The initial call to Apollo is made from Next.js (the FE container) on the server side, which means the BE has to be addressed over the Docker network (it cannot be localhost, because for this call localhost is the FE container itself). In my case that call goes to process.env.BE, which is set to http://be:4000.
However, for other calls (e.g. sending a login request from the browser) the Docker network is unknown (the call comes from localhost, which has no access to the Docker network), so you have to address localhost/api/graphql.
I was able to achieve that with a small change in my FE httpLink - the Apollo connecting function:
uri: isBrowser ? `/api/graphql` : `${process.env.BE}/api/graphql`
The NGINX config is the same as above.
NOTE: This only needs to be handled in the local environment; on the remote server it works fine without this 'hack' and the address is always domain.com/api/graphql.

Docker swarm reverse proxy+load balancing with Nginx

I have a docker compose file with 2 services: joomla and phpmyadmin.
I need a reverse proxy which behaves like below:
path: / --> joomla service
path: /managedb --> phpmyadmin service
server {
    listen 80;
    server_name localhost;
    location / {
        proxy_pass http://joomla;
    }
    location /managedb {
        proxy_pass http://phpmyadmin;
    }
}
Everything works well; however, I need to add load balancing to spread work across my 3 machines in the Docker swarm.
They are all VMs on the same LAN with static IPs 192.168.75.11/12/13.
The Nginx way to add load balancing would be the following:
upstream joomla_app {
    server 192.168.75.11;
    server 192.168.75.12;
    server 192.168.75.13;
}
upstream phpmyadmin_app {
    server 192.168.75.11;
    server 192.168.75.12;
    server 192.168.75.13;
}
server {
    listen 80;
    server_name localhost;
    location / {
        proxy_pass http://joomla_app;
    }
    location /managedb {
        proxy_pass http://phpmyadmin_app;
    }
}
However, since the only exposed port is Nginx's port 80, because I need it as a reverse proxy too, the code above obviously doesn't work.
So how can I add load balancing in this scenario?
Thank you in advance!
In Docker swarm you don't need your own load balancer; it has a built-in one. Simply scale your services and that's all. The swarm name resolver will resolve joomla and phpmyadmin either to a virtual IP that acts as the swarm load balancer for that service, or, if you configure the service to work in dnsrr mode, it will use DNS round-robin when resolving the service name to container IPs.
However, if you want to distribute services across nodes in the swarm, that's a different thing. In this case, you can set placement constraints for each service or make them "global" instead of replicated - see https://docs.docker.com/engine/swarm/services/#control-service-placement
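Concretely, if nginx itself runs as a swarm service on the same overlay network as joomla and phpmyadmin, the proxy config from the question should be enough as-is, since the service names already resolve to swarm virtual IPs that balance across replicas; a sketch under that assumption:
server {
    listen 80;
    server_name localhost;
    location / {
        # "joomla" resolves to the swarm VIP, which balances across the service's replicas
        proxy_pass http://joomla;
    }
    location /managedb {
        proxy_pass http://phpmyadmin;
    }
}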

Nginx Reverse Proxy with Dynamic Containers

I have a reverse proxy with nginx set up using docker compose. It is fully working when I run all services together with docker-compose up. However, I want to be able to run individual containers, and start (docker-compose up service1) and stop them independently from the proxy container. Here is a snippet from my current nginx config:
server {
    listen 80;
    location /service1/ {
        proxy_pass http://service1/;
    }
    location /service2/ {
        proxy_pass http://service2/;
    }
}
Right now if I run service1, service2, and the proxy together all is well. However, if I run the proxy and only service2, for example, I get the following error: host not found in upstream "service1" in /etc/nginx/conf.d/default.conf:13. The behavior I want is for nginx to just return some HTTP error, and to route to the service appropriately once it does come up.
Is there any way to get this behavior?
Your issue is with nginx. It will fail to start if it cannot resolve one of the upstream hostnames.
In your case the docker service name will be unresolvable if the service is not up.
Try one of the solutions here, such as resolving at the location level.
(edit) The below example works for me:
events {
    worker_connections 4096;
}
http {
    server {
        location /service1 {
            resolver 127.0.0.11;
            set $upstream http://service1:80;
            proxy_pass $upstream;
        }
        location /service2 {
            resolver 127.0.0.11;
            set $upstream2 http://service2:80;
            proxy_pass $upstream2;
        }
    }
}
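One detail worth noting: when proxy_pass uses a variable without a URI part, nginx forwards the original request URI unchanged, so the /service1/ prefix is no longer stripped the way proxy_pass http://service1/; strips it in the original config. A sketch of one way to restore that behaviour, assuming the service expects paths without the prefix:
location /service1/ {
    resolver 127.0.0.11;
    set $upstream http://service1:80;
    # Drop the /service1/ prefix before proxying, mirroring proxy_pass http://service1/;
    rewrite ^/service1/(.*)$ /$1 break;
    proxy_pass $upstream;
}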
Sounds like you need to use load balancing. I believe with load balancing it will attempt to share the load across servers/services. If one goes down, it should automatically use the others.
Example
http {
    upstream myapp1 {
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://myapp1;
        }
    }
}
Docs: http://nginx.org/en/docs/http/load_balancing.html
