Ignore service availability when starting nginx in Docker Swarm - docker

The application is made up of several services, in front of which we use nginx as the web server. We deploy all of these services, including nginx, in Docker Swarm.
docker-compose.yaml:
version: '3'
services:
  sa:
    image: xx.com/service-a
  sb:
    image: xx.com/service-b
  sc:
    image: xx.com/service-c
  ....
  gateway:
    image: nginx
    volumes:
      - /nginx.conf:/etc/nginx/conf.d/default.conf:ro
networks:
  overlay:
nginx.conf:
location / {
    proxy_pass http://sa;
}
location /sb/ {
    proxy_pass http://sb;
}
location /sc/ {
    proxy_pass http://sc;
}
......
So far so good. However, when the stack starts and one of the services (say sc) fails to start, nginx fails to start too, which makes our whole application unavailable.
It seems that Docker's embedded DNS server cannot resolve the host sc, since that service has not started yet.
We do not want a single service to affect the whole application, so this could be restated as another question: "how can nginx ignore the availability of an upstream/proxied host during startup?" Searching turned up no solution. Any ideas?

You can delay the DNS resolution until you need it, so nginx can start (or restart) without resolving the upstream first. To do that, you must use a variable in your proxy_pass directive:
...
set $backend "http://serviceD";
proxy_pass $backend;
...
I was able to simulate your problem with the default.conf and docker-compose.yaml below.
In the nginx image, the default resolver points to Docker's embedded DNS server at 127.0.0.11:
resolver 127.0.0.11 valid=30s;
resolver_timeout 5s;
server {
    listen 0.0.0.0:80;
    server_name localhost;
    location / {
        proxy_pass http://serviceA;
    }
    location /sd/ {
        set $backend "http://serviceD";
        proxy_pass $backend;
    }
}
docker-compose.yaml:
version: '3.7'
services:
  serviceA:
    image: debian:stretch-slim
    command: ["sleep", "3600"]
  serviceD:
    image: debian:stretch-slim
    command: ["sleep", "3600"]
  nginx:
    image: nginx
    volumes:
      - ${PWD}/default.conf:/etc/nginx/conf.d/default.conf:ro
    command: ["/bin/sh", "-c", "exec nginx -g 'daemon off;'"]
    restart: always
    ports:
      - target: 80
        published: 8080
        protocol: tcp
        mode: host
  testD:
    image: alpine:latest
    restart: always
    command: ["/bin/sh", "-c", "( apk add --no-cache bind-tools && host serviceD && ping -c 8 -i 4 serviceD )"]

Related

Deploying a Docker container with Nginx and FastAPI on Google Cloud Platform from an SSH terminal

I have an issue when trying to deploy a simple FastAPI application with Nginx on Google Cloud Platform. In my case I have to use an SSH terminal to run the Docker container with Nginx and FastAPI. My nginx.conf configuration looks like:
access_log /var/log/nginx/app.log;
error_log /var/log/nginx/app.log;
server {
    server_name example.com;
    listen 80;
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate /root/ssl/cert.pem;
    ssl_certificate_key /root/ssl/key.pem;
    location / {
        proxy_pass "http://example.com:8004/";
    }
}
And my docker-compose.yml looks like:
version: '3.8'
services:
  nginx-proxy:
    image: nginx
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./nginx:/etc/nginx/conf.d
      - ./ssl/cert1.pem:/root/ssl/cert.pem
      - ./ssl/privkey1.pem:/root/ssl/key.pem
      - ./ssl/dhparam.pem:/etc/nginx/dhparam/dhparam.pem
  web:
    environment: [.env]
    build: ./project
    ports:
      - 8004:8000
    command: gunicorn main:app -k uvicorn.workers.UvicornWorker -w 2 -b 0.0.0.0:8000
    volumes:
      - ./project:/usr/src/app
networks:
  default:
    external:
      name: nginx-proxy
Also, I have a Google Cloud VM instance with the "Allow HTTP/HTTPS traffic" options enabled, and I additionally configured firewall rules that allow TCP connections over ports 443 and 80 (the domain name is also provided by Google Cloud, and resolves to the VM's external IP address when I put it in my browser's address field).
I run my Docker image from the SSH terminal with docker-compose up --build, and then I get a 502 Bad Gateway nginx error in my browser (after going to example.com). I would like to know whether it is possible to run the Docker image this way from inside an SSH terminal, and which steps I missed to do it the right way.
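One detail worth checking in setups like this (my own note, not part of the original question): when nginx and the app share a compose network, nginx can proxy to the service name directly instead of looping back through the public domain. A minimal sketch, reusing the web service and its internal port 8000 from the compose file above:
location / {
    proxy_pass http://web:8000/;
}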

Docker compose of nginx, express, letsencrypt SSL gets 502 Bad Gateway

I am trying to find a way to run nginx, express, and Let's Encrypt SSL all together using docker-compose. There are many documents about this, so I referenced them and tried to make my own configuration. I succeeded in configuring nginx + SSL following https://medium.com/@pentacent/nginx-and-lets-encrypt-with-docker-in-less-than-5-minutes-b4b8a60d3a71
So now I want to put a sample nodejs express app into the nginx + SSL docker-compose setup. But I don't know why I get a 502 Bad Gateway from nginx rather than express's initial page.
I am testing this app with a spare domain of mine, on AWS EC2 with Ubuntu 16.04. I think there is no problem with the domain's DNS or the security-group settings; ports 80, 443, and 3000 are all open already, and when I tested without the express app, the nginx default page showed up fine.
nginx conf in /etc/nginx/conf.d
server {
    listen 80;
    server_name example.com;
    return 301 https://$server_name$request_uri;
}
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name example.com;
    server_tokens off;
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
    ssl_certificate /etc/letsencrypt/live/sendpi.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/sendpi.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
docker-compose.yml
version: '3'
services:
  app:
    container_name: express
    restart: always
    build: .
    ports:
      - '3000:3000'
  nginx:
    container_name: nginx
    image: nginx:1.15-alpine
    restart: unless-stopped
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    ports:
      - "80:80"
      - "443:443"
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
Dockerfile of express
FROM node:12.2-slim
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
I think SSL works fine, but there is some problem between the express app and nginx. How can I fix this?
proxy_pass http://localhost:3000
is proxying the request to port 3000 on the container that is running nginx. What you want instead is to connect to port 3000 of the container running express. For that, we need to do two things.
First, we make the express container visible to the nginx container at a predefined hostname. We can use links in docker-compose:
nginx:
  links:
    - "app:expressapp"
Alternatively, since links are now considered a legacy feature, a better way is to use a user-defined network. Define a network of your own with
docker network create my-network
and then connect your containers to that network in the compose file by adding the following at the top level:
networks:
  default:
    external:
      name: my-network
All the services connected to a user defined network can access each other via name without explicitly setting up links.
Then in the nginx.conf, we proxy to the express container using that hostname:
location / {
    proxy_pass http://app:3000;
}
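To verify the name actually resolves from inside the nginx container, you can run a quick check; this assumes the alpine-based nginx image from the compose file, whose busybox wget is available:
docker-compose exec nginx wget -qO- http://app:3000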
Warning: The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link.
Define networks in your docker-compose.yml and configure your services with the appropriate network:
version: '3'
services:
  app:
    restart: always
    build: .
    networks:
      - backend
    expose:
      - "3000"
  nginx:
    image: nginx:1.15-alpine
    restart: unless-stopped
    depends_on:
      - app
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    networks:
      - frontend
      - backend
    ports:
      - "80:80"
      - "443:443"
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
networks:
  frontend:
  backend:
Note: the app service no longer publishes its ports to the host; it only exposes port 3000 (ref. exposing and publishing ports), so it is only available to services connected to the backend network. The nginx service has a foot in both the backend and frontend networks, to accept incoming traffic from the frontend and proxy the connections to the app in the backend (ref. multi-host networking).
With user-defined networks you can resolve the service name:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    upstream app {
        server app:3000 max_fails=3;
    }
    server {
        listen 80;
        server_name example.com;
        return 301 https://$server_name$request_uri;
    }
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        server_name example.com;
        server_tokens off;
        location / {
            proxy_pass http://app;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
        ssl_certificate /etc/letsencrypt/live/sendpi.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/sendpi.com/privkey.pem;
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    }
}
Removing the container_name from your services makes it possible to scale them: docker-compose up -d --scale nginx=1 --scale app=3 - nginx will load-balance the traffic round-robin across the 3 app containers.
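A quick way to confirm the scaling took effect (my own check, not from the original answer):
docker-compose up -d --scale nginx=1 --scale app=3
docker-compose ps   # should list one nginx container and three app containers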
I think a source of confusion here may be the way the "localhost" designation behaves among services running under docker-compose. Each container understands itself to be "localhost", so "localhost" does not refer to the host machine (and, if I'm not mistaken, a container cannot reach a service published on a host port this way, apart from maybe some security exploits). To demonstrate:
services:
  app:
    container_name: express
    restart: always
    build: .
    ports:
      - '2999:3000' # expose app's port on host's 2999
Rebuild
docker-compose build
docker-compose up
Tell the container running the express app to curl its own service on port 3000:
$ docker-compose exec app /bin/bash -c "curl http://localhost:3000"
<!DOCTYPE html>
<html>
<head>
<title>Express</title>
<link rel='stylesheet' href='/stylesheets/style.css' />
</head>
<body>
<h1>Express</h1>
<p>Welcome to Express</p>
</body>
</html>
Tell app to try that same service, which we exposed on port 2999 of the host machine:
$ docker-compose exec app /bin/bash -c "curl http://localhost:2999"
curl: (7) Failed to connect to localhost port 2999: Connection refused
We will of course see this same behavior between running containers as well, so in your setup nginx was trying to proxy to its own service running on localhost:3000 (but there wasn't one, as you know).
Tasks
build a NodeJS app
add SSL functionality out of the box (so it can work automatically)
Solution
https://github.com/evertramos/docker-compose-letsencrypt-nginx-proxy-companion
/{path_to_the_project}/docker-compose.yml
version: '3.7'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy:alpine
    restart: always
    container_name: nginx-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs:ro
      - ./vhost.d:/etc/nginx/vhost.d
      - ./html:/usr/share/nginx/html
      - ./conf.d:/etc/nginx/conf.d
    ports:
      - "443:443"
      - "80:80"
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: letsencrypt
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./certs:/etc/nginx/certs:rw
      - ./vhost.d:/etc/nginx/vhost.d:rw
      - ./html:/usr/share/nginx/html:rw
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
  api:
    container_name: ${APP_NAME}
    build:
      context: .
      dockerfile: Dockerfile
    command: npm start --port ${APP_PORT}
    expose:
      - ${APP_PORT}
    # ports:
    #   - ${APP_PORT}:${APP_PORT}
    restart: always
    environment:
      VIRTUAL_PORT: ${APP_PORT}
      VIRTUAL_HOST: ${DOMAIN}
      LETSENCRYPT_HOST: ${DOMAIN}
      LETSENCRYPT_EMAIL: ${LETSENCRYPT_EMAIL}
      NODE_ENV: production
      PORT: ${APP_PORT}
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs:ro
/{path_to_the_project}/.env
APP_NAME=best_api
APP_PORT=3000
DOMAIN=api.site.com
LETSENCRYPT_EMAIL=myemail@gmail.com
Do not forget to point DOMAIN at your server before you run the containers there.
How does it work? Just run docker-compose up --build -d.
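Then you can watch the companion container obtain the certificate and verify the endpoint; letsencrypt is the container_name from the compose file above, and the domain comes from .env:
docker logs -f letsencrypt
curl -I https://api.site.com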

Docker Compose: Django, uWSGI, NGINX without Proxy (different containers)

My docker-compose.yaml is
version: '3'
services:
  nginx:
    restart: always
    build: ./nginx/
    depends_on:
      - web
    ports:
      - "8000:8000"
    network_mode: "host" # Connection between containers
  web:
    build: .
    image: app-image
    ports:
      - "80:80"
    volumes:
      - .:/app-name
    command: uwsgi /app-path/web/app.ini
NGINX conf file is
upstream web {
    server 0.0.0.0:80;
}
server {
    listen 8000;
    server_name web;
    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        alias "/app-static/";
    }
    location / {
        proxy_pass http://web;
    }
}
So basically I have Django and uWSGI in one container, 'web', and NGINX in the container 'nginx'. I linked the two using NGINX as an HTTP proxy, and both worked fine. (I somehow needed network_mode: "host"; without it this didn't work.)
Since they are different containers, I cannot use a .sock file (unless I use some volume hack to share the .sock file, which is not good!).
Even though this works, I have been asked to avoid using NGINX as a plain HTTP proxy, so is there any other way to connect these two? Searching didn't turn up alternatives. I tried…
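One alternative that avoids proxying plain HTTP (my own sketch, not from this thread) is to speak the native uwsgi protocol over TCP: let uWSGI listen on a TCP socket, e.g. socket = :8001 in app.ini (an assumed port), and point nginx at it with uwsgi_pass:
location / {
    include uwsgi_params;
    uwsgi_pass web:8001;   # 'web' is the compose service name on a shared network; 8001 is the assumed uWSGI socket port
}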

Docker uWSGI - NGINX: uWSGI ok but NGINX is a 502

I configured my django-uwsgi-nginx stack using docker compose with the following files.
From the browser, "http://127.0.0.1:8000/" works fine and gives me the Django default page.
From the browser, "http://127.0.0.1:80" throws a 502 Bad Gateway.
dravoka-docker.conf
upstream web {
    server 0.0.0.0:8000;
}
server {
    listen 80;
    server_name web;
    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        alias "/dravoka-static/";
    }
    location / {
        include uwsgi_params;
        proxy_pass http://web;
    }
}
nginx/Dockerfile
FROM nginx:latest
RUN echo "---------------------- I AM NGINX --------------------------"
RUN rm /etc/nginx/conf.d/default.conf
ADD sites-enabled/ /etc/nginx/conf.d
RUN nginx -t
web is just from "django-admin startproject web"
docker-compose.yaml
version: '3'
services:
  nginx:
    restart: always
    build: ./nginx/
    depends_on:
      - web
    ports:
      - "80:80"
  web:
    build: .
    image: dravoka-image
    ports:
      - "8000:8000"
    volumes:
      - .:/dravoka
    command: uwsgi /dravoka/web/dravoka.ini
Dockerfile
# Ubuntu base image
FROM ubuntu:latest
# Some installs........
EXPOSE 80
When you say "from the docker instance", are you running curl from within the container, or from your local machine?
If you are running it from your local machine, update your docker-compose's web service to the following:
...
web:
  build: .
  image: dravoka-image
  expose:
    - "8000"
  volumes:
    - .:/dravoka
  command: uwsgi /dravoka/web/dravoka.ini
and try again.
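To rule out container-to-container networking, you can also probe the web service from inside the nginx container (service names as in the compose file above; curl may need to be installed in the image first):
docker-compose exec nginx getent hosts web          # should print the web container's IP
docker-compose exec nginx curl -I http://web:8000   # assuming curl is present in the image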

How do I use a different port for an nginx proxy on docker?

I have two containers running via docker-compose:
version: '3'
services:
  web:
    image: me/web
    container_name: web
  nginx:
    image: me/nginx
    container_name: nginx
    ports:
      - '80:80'
    volumes:
      - ../nginx:/etc/myapp/nginx
My nginx container copies in a new default.conf from the mapped volume via its entrypoint.sh:
#!/usr/bin/env bash
cp /etc/myapp/nginx/default.conf /etc/nginx/conf.d/default.conf
nginx -g 'daemon off;'
My custom default.conf looks like:
server {
    listen 80;
    server_name my.website.com;
    location / {
        proxy_pass http://web/;
    }
}
With this configuration everything works as expected. After starting with docker-compose, I can navigate to http://my.website.com and reach the web container properly.
However, now I want to change the published port to something other than the default 80, such as 81:
services:
  ..
  nginx:
    image: me/nginx
    container_name: nginx
    ports:
      - '81:80'
  ..
Now this no longer works. Whenever I visit http://my.website.com:81 I get:
This site can’t be reached
my.website.com refused to connect.
ERR_CONNECTION_REFUSED
The other weird part is that if I use localhost rather than my.website.com, everything works just fine on port 81. Ex:
server {
    listen 80;
    server_name localhost;
    location / {
        proxy_pass http://web/;
    }
}
Navigating to http://localhost:81 works correctly.
What am I doing wrong here? How do I configure nginx with a server_name other than localhost (my domain) and proxy on a port other than 80 to my web container?
Check that port 81 is open on my.website.com, i.e. check that firewall rules etc. are in place.
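Some generic checks, assuming a Linux host (none of these are specific to this setup):
ss -ltn | grep ':81'                 # on the host: is anything listening on port 81?
curl -I http://my.website.com:81     # from another machine: does the port answer?
sudo ufw allow 81/tcp                # if ufw manages the firewall, open the port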
