Docker Swarm and Nginx + Varnish - docker

I have a docker-compose file that contains Nginx, PhpFpm and Varnish.
My Nginx works this way:
User connects to the website => Nginx (443) => Varnish (80) => Nginx (8080) => PhpFpm (9000) or others.
My project works well with the "depends_on" config inside docker-compose.
But with docker swarm, depends_on is ignored.
That's where my problems start.
My Varnish container needs Nginx to be running, or it crashes, because of the hostname defined at the top of its configuration file:
# varnish config file
backend default {
    .host = "nginx";
    .port = "8080";
    .connect_timeout = 10s;
    .first_byte_timeout = 10s;
    .between_bytes_timeout = 10s;
}
And my Nginx needs Varnish to be running, or it crashes too:
# pass to varnish
location / {
    proxy_pass http://varnish;
}
upstream varnish {
    server varnish:80;
}
So Varnish crashes because Nginx is not up, and Nginx crashes because Varnish is not up.
Is there any solution to this problem?

The reason the Varnish container fails is that the nginx hostname cannot be resolved to an IP address.
It is possible in docker-compose.yml to assign static IP addresses to the various containers. Consider using fixed IP addresses and assigning one of them to the .host property of your Varnish backend.
This way you avoid the cyclical dependency: even if the IP address doesn't exist yet, Varnish won't complain, because Varnish only connects to the backend when a cache miss or cache bypass occurs.
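As an illustration only (the subnet, addresses and images below are placeholders, and behaviour may differ between plain docker-compose and a swarm stack deploy), a compose file pinning nginx to a known address could look roughly like this:
version: "3.7"
services:
  nginx:
    image: nginx:latest
    networks:
      web:
        ipv4_address: 172.28.0.10   # fixed address for the Varnish backend to target
  varnish:
    image: varnish:stable
    networks:
      web:
        ipv4_address: 172.28.0.11
networks:
  web:
    driver: bridge
    ipam:
      config:
        - subnet: 172.28.0.0/16
The Varnish backend would then use .host = "172.28.0.10"; instead of the nginx hostname, so Varnish can start even while nginx is still down.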

Related

Docker containers with Nginx share the same network but can't reach each other

Recently I've been trying to set up a little home server with a built-in DNS.
The DNS service is provided by lancacheDNS and set up in combination with a Monolithic cache (port 1234) in two docker containers on 192.168.178.11 (the host machine) in my local network.
Since I want to serve a website (port 8080) along with some independent APIs (ports 8081, 8082 or whatever), I decided to use Nginx as a reverse proxy.
The DNS does the following:
getr.me --> 192.168.178.11
The routing works completely fine and getr.me:8080 gives me my website as expected.
Now the tricky part (for me):
Set up Nginx such that:
website.getr.me --> serves the website
api1.getr.me --> serves API1
api2.getr.me --> serves API2
For that I created a network "default_dash_nginx".
I edited the nginx compose config to connect to it via:
networks:
  default:
    name: default_dash_nginx
    external: true
I also connected my website-serving container (dashboard) to the network via --network default_dash_nginx.
The website container gets the IP 172.20.0.4 (received via docker inspect default_dash_nginx) and the nginx server is also connected to the network.
Nginx works and I can edit the admin page.
But unfortunately, even though I set the proxy host to the IP + port of my website received from the network, the site is not available. Here is the output of my network inspection: https://pastebin.com/jsuPZpqQ
I hope you have another idea,
thanks in advance,
Maxi
Edit:
The nginx container is actually an NginxReverseProxyManager container (I don't know if that was unclear above or simply not important).
The Nginx container can actually ping the website container and also fetch the HTML files from port 80 from it.
So it seems like nginx itself isn't working like it should.
The first answer got no results (I tried saving it as each of the mentioned files here).
Did I miss something, or am I just not smart enough?
Here is my nginx config:
server {
    listen 80;
    server_name api1.getr.me;
    location / {
        proxy_pass http://localhost:8081;
    }
}
server {
    listen 80;
    server_name api2.getr.me;
    location / {
        proxy_pass http://localhost:8082;
    }
}
server {
    listen 80;
    server_name some.getr.me;
    location / {
        proxy_pass http://localhost:XXXX;
    }
}

NGINX Server to Redirect to Docker Container

Here is what I want to achieve:
On server A there is docker installed. There are, let's say, 3 containers:
Container 1: App1, IP: 172.17.0.2, network: mynet, a simple HTML welcome page, accessible on port 80
Container 2: App2, IP: 172.17.0.3, network: mynet, a wiki system -> DokuWiki, accessible on port 8080
Container 3: App3, IP: 172.17.0.4, network: mynet, something else
As you can see, every container is in the same Docker network. The containers are accessible on different ports.
The clients on the same network need to access all of the containers. I can't use DNS in this case (reverse proxy via vhost), because I don't control the DNS. My goal:
Container 1: accessible via http://myserver.home.local/app1/
Container 2: accessible via http://myserver.home.local/app2/
Container 3: accessible via http://myserver.home.local/app3/
What I did to solve this: add another container with nginx and proxy_pass to the other containers. I use the official nginx image (docker pull nginx), then I mount my custom config into the /etc/nginx/conf.d dir. My config looks like this:
server {
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
    location /app1/ {
        proxy_pass http://app1/;
    }
    location /app2/ {
        proxy_pass http://app2:8080/;
    }
    location /app3/ {
        proxy_pass http://app3/;
    }
}
App1 works. App2 does not: it prints some ugly HTML output, and in the browser web console I see a lot of 404s. I guess this has something to do with reverse proxying / rewriting in nginx, because app2 is DokuWiki. I also added the nginx equivalent of Apache's ProxyPassReverse, without success.
I just don't know what to do in this case, or where to start. How can I know what has to be rewritten? I hope someone can help me.
As mentioned in the comments:
As soon as I use the DokuWiki basedir / baseurl config, the proxy works as expected. To do so, edit the dokuwiki.php configuration file located in the conf folder:
conf/dokuwiki.php
Change the following settings to match your environment:
$conf['basedir'] = '/dokuwiki';
$conf['baseurl'] = '';

docker nginx proxy nginx connect() failed (111: Connection refused) while connecting to upstream

I'm trying to run an nginx container as the main entry point for all of my websites and web services. I managed to run Portainer as a container, and I'm able to reach it from the internet. Right now I'm trying to reach a static website hosted by another nginx container, but I fail to do so - when I go to the URL, I get
502 Bad Gateway
I've tried adding the upstream section to my main nginx's config, but nothing changed (after every config change, I reload my main nginx service inside the container).
On the other hand, adding upstream is something I'd like to avoid if it's possible because spawning multiple different applications would require adding an upstream for each application - and that's much more work than I'd expect.
Here is my main nginx's configuration file:
events {
    worker_connections 1024;
}
http {
    server {
        listen 80;
        location /portainer/ {
            proxy_pass http://portainer:9000/;
        }
        location /helicon/ {
            proxy_pass http://helicon:8001/;
        }
    }
}
Here is how I start my main nginx container:
docker run -p 80:80 --name nginx -v /var/nginx/conf:/etc/nginx:ro --net=internal-net -d nginx
Here is my static website's nginx configuration file:
events {
    worker_connections 1024;
}
http {
    server {
        listen 80;
        server_name helicon;
        root /var/www/html/helicon;
        error_log /var/log/nginx/localhost.error.log;
        access_log /var/log/nginx/localhost.access.log;
    }
}
Here is the docker-compose file used to create and start that container:
version: '3.5'
services:
  helicon:
    build: .
    image: helicon
    ports:
      - "127.0.0.1:8001:80"
    container_name: helicon
    networks:
      - internal-net
networks:
  internal-net:
    external: true
I'm using the internal-net network to keep all apps in the same network, instead of the deprecated --link option for docker run.
When I go to http://my.server.ip.address/helicon I get 502. Then I check the logs with docker logs nginx and there is this information:
2018/06/24 11:15:28 [error] 848#848: *466 connect() failed (111: Connection refused) while connecting to upstream, client: Y.Y.Y.Y, server: , request: "GET /helicon/ HTTP/1.1", upstream: "http://172.18.0.2:8001/", host: "X.X.X.X"
The helicon container indeed has an IP address of 172.18.0.2.
What am I missing? Maybe my approach should be completely different from using networks?
Kind regards,
Daniel
To anyone coming across this page, here is my little contribution to your understanding of docker networking.
I would like to illustrate with an example scenario.
We are running several containers with docker-compose, such as the following:
Docker container client
Docker container nginx reverse proxy
Docker container service1
Docker container service2
etc ...
To make sure you are set up correctly, check the following:
All containers are on the same network!
First run "docker network ls" to find the network name for your stack.
Then run "docker network inspect [your_stack_network_name]".
Note that the ports you expose in docker-compose have nothing to do with nginx reverse proxying!
That means any ports you expose in your docker-compose file are available on your actual host machine (i.e. your laptop or PC) via your browser, BUT for proxying purposes you must point nginx at the actual ports of your services.
A walkthrough:
Let's say service1 runs inside container 1 on port 3000 and you mapped port 8080 in your docker-compose file like so: "8080:3000". With this configuration you can access the container on your local machine via your browser on port 8080 (localhost:8080), BUT for the nginx reverse proxy container, when trying to proxy to service1, port 8080 is not relevant! The dockerized nginx reverse proxy will use docker DNS to map service1 to its IP within the docker network and look at it entirely independently from your local host.
To the nginx reverse proxy inside the docker network, service1 only listens on port 3000!
So make sure to point nginx at the correct port!
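A minimal sketch of that walkthrough (the service names, ports and image below are made up for illustration):
services:
  service1:
    build: ./service1
    ports:
      - "8080:3000"   # 8080 is only for your browser on the host; inside the network the app listens on 3000
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
Inside the nginx container the proxy would then point at the container port, e.g. proxy_pass http://service1:3000/;, not at 8080.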
Solved. I was working on this for hours thinking it was an nginx config issue. I modified nginx.conf endlessly but couldn't fix it. I was getting 502 Bad Gateway and the error description was:
failed (111: Connection refused) while connecting to upstream
I was looking in the wrong place. It turns out that the HTTP server in my index.js file was listening on the hostname 'localhost'.
httpServer.listen(PORT, 'localhost', async err => {
This works fine on your development machine, but when running inside a container it must bind to the hostname of the container itself. My containers are networked, and in my case the container is named 'backend'.
I changed the hostname from 'localhost' to 'backend' and everything works fine.
httpServer.listen(PORT, 'backend', async err => {
I was getting the same error. In docker-compose.yml my service's port was mapped to a different port (e.g. 1234:8080) and I was using the mapped port number (1234) inside nginx.conf.
However, inside the docker network the containers do not use their mapped port numbers. To solve this, I changed the proxy_pass statement to use the correct port number (8080).
To make it clear, the working configuration is like this (check the port number used in nginx.conf!):
docker-compose.yml
version: '3.8'
services:
  ...
  web1:
    ports:
      - 1234:8080
    networks:
      - net1
  ...
  proxy:
    image: nginx:latest
    networks:
      - net1
  ...
networks:
  net1:
    driver: bridge
nginx.conf
...
location /api
...
proxy_pass http://web1:8080/;
I must thank user8458126 for pointing me in the right direction.
In my case, I had overwritten my default.conf nginx file but mistyped its destination in my Dockerfile, so my config was never loaded and nginx fell back to its default configuration on port 80.
Long story short, make sure you're copying to the correct path.
What I had:
COPY ./default.conf ./etc/nginx/default.conf
Correct:
COPY ./default.conf ./etc/nginx/conf.d/default.conf
Hope this saves someone a few hours of racking their brain.
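A quick way to verify which configuration the container actually loaded (assuming your container is named nginx) is to dump the effective config:
docker exec nginx nginx -T                 # prints the full configuration nginx is really using
docker exec nginx nginx -T | grep listen   # or just check which ports it listens on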

Re-resolve of backend in nginx SNI docker swarm

I am using nginx to do TCP forwarding based on hostname as discussed here: Nginx TCP forwarding based on hostname
When the upstream containers are taken down for a short period of time (5 or so minutes) and then brought back up, nginx doesn't seem to re-resolve them (I continue to get a 111: Connection refused error).
I've attempted to put a resolver in the server block of the nginx config:
server {
    listen 443;
    resolver x.x.x.x valid=30s;
    proxy_pass $name;
    ssl_preread on;
}
I still get the same behaviour with this in place.
Like BMitch says, you can scale the service to 0 to ensure DNS remains available to Nginx (see the command sketch after the list below).
But really, if you're using nginx in Swarm, I recommend using a Swarm-aware proxy solution that dynamically updates the nginx/haproxy config based on services that carry the proper labels. In those cases, when a service is removed, its config is also removed from the proxy. Ones I've used include:
Traefik
Docker Flow Proxy
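For the scale-to-zero approach mentioned above, the idea is to keep the service defined rather than removing it, so its name keeps resolving; the stack/service name here is only an example:
docker service scale mystack_backend=0   # the service still exists, so its DNS entry stays around
docker service scale mystack_backend=1   # bring the tasks back later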

How to dockerize two applications talking to each other via a http server?

TL;DR
How can we set up a docker-compose environment so we can reach a container under multiple, custom-defined aliases? (Or any alternative that solves our problem in another fashion.)
Existing setup
We have two applications† (nodejs servers), each behind an HTTP reverse proxy (Nginx), that need to talk to each other. On localhost, configuring this is easy:
Add /etc/hosts entries for ServerA and ServerB:
127.0.0.1 server-a.testing
127.0.0.1 server-b.testing
Run ServerA on port e.g. 2001 and ServerB on port 2002
Configure two virtual hosts, reverse proxying to ServerA and ServerB:
server { # Forward all traffic for server-a.testing to localhost:2001
    listen 80;
    server_name server-a.testing;
    location / {
        proxy_pass http://localhost:2001;
    }
}
server { # Forward all traffic for server-b.testing to localhost:2002
    listen 80;
    server_name server-b.testing;
    location / {
        proxy_pass http://localhost:2002;
    }
}
This setup is great for testing: both applications can communicate with each other in a way that is very close to the production environment, e.g. request('https://server-b.testing', fn); and we can test how the HTTP server configuration interacts with our apps (e.g. TLS config, CORS headers, HTTP2 proxying).
Dockerize all the things!
We now want to move this setup to docker and docker-compose. The docker-compose.yaml that would work in theory is this:
nginx:
  build: nginx
  ports:
    - "80:80"
  links:
    - server-a
    - server-b
server-a:
  build: serverA
  ports:
    - "2001:2001"
  links:
    - nginx:server-b.testing
server-b:
  build: serverB
  ports:
    - "2002:2002"
  links:
    - nginx:server-a.testing
So when ServerA addresses http://server-b.testing it actually reaches Nginx, which reverse proxies it to ServerB. Unfortunately, circular dependencies are not possible with links. There are three typical solutions to this problem:
use ambassadors
use nameservers
use the brand new networking (--x-networking).
None of these work for us, because, for the virtual hosting to work, we need to be able to address the Nginx container under the names server-a.testing and server-b.testing. What can we do?
(†) Actually it's a little bit more complicated – four components and links – but that shouldn't make any difference to the solution:
testClient (-> Nginx) -> ServerA,
testClient (-> Nginx) -> ServerB,
ServerA (-> Nginx) -> ServerB,
testClient (-> Nginx) -> ServerC,
Try this:
Link your server-a and server-b containers to nginx with --link server-a:server-a --link server-b:server-b
Update the nginx conf file with:
location /sa {
    proxy_pass http://server-a:2001;
}
location /sb {
    proxy_pass http://server-b:2002;
}
When you link two containers, docker adds an entry mapping the linked container's name to its IP in the /etc/hosts file of the linking container. So, in this case, server-a and server-b are resolved to their respective container IPs via the /etc/hosts file.
And you can then access them at http://localhost/sa or http://localhost/sb.
