Nginx upstream doesn't work with docker stack deploy

I'm trying to deploy a stack with docker.
Here is how my stack works:
nginx-proxy (routes user requests to the right container)
website (simple nginx serving a website)
api (Django application, launched with gunicorn)
nginx-api (serves static and uploaded files, and forwards requests to the API container if they hit an API endpoint)
This is my docker-compose.yml:
version: '3.2'
services:
  website:
    container_name: nyl2pronos-website
    image: nyl2pronos-website
    restart: always
    build:
      context: nyl2pronos_webapp
      dockerfile: Dockerfile
    volumes:
      - ./logs/nginx-website:/var/log/nginx
    expose:
      - "80"
    deploy:
      replicas: 10
      update_config:
        parallelism: 5
        delay: 10s
  api:
    container_name: nyl2pronos-api
    build:
      context: nyl2pronos_api
      dockerfile: Dockerfile
    image: nyl2pronos-api
    restart: always
    ports:
      - 8001:80
    expose:
      - "80"
    depends_on:
      - db
      - memcached
    environment:
      - DJANGO_PRODUCTION=1
    volumes:
      - ./data/api/uploads:/code/uploads
      - ./data/api/static:/code/static
  nginx-api:
    image: nginx:latest
    container_name: nyl2pronos-nginx-api
    restart: always
    expose:
      - "80"
    volumes:
      - ./data/api/uploads:/uploads
      - ./data/api/static:/static
      - ./nyl2pronos_api/config:/etc/nginx/conf.d
      - ./logs/nginx-api:/var/log/nginx
    depends_on:
      - api
  nginx-proxy:
    image: nginx:latest
    container_name: nyl2pronos-proxy
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./proxy:/etc/nginx/conf.d
      - /etc/letsencrypt:/etc/letsencrypt
      - ./logs/nginx-proxy:/var/log/nginx
    deploy:
      placement:
        constraints: [node.role == manager]
    depends_on:
      - nginx-api
      - website
When I use docker-compose up, everything works fine.
But when I try to deploy with docker stack deploy --compose-file=docker-compose.yml prod, my nginx config files can't find the different upstreams.
This is the error provided by my service nginx-api:
2019/03/23 17:32:41 [emerg] 1#1: host not found in upstream "api" in /etc/nginx/conf.d/nginx.conf:2
See below my nginx.conf:
upstream docker-api {
    server api;
}

server {
    listen 80;
    server_name xxxxxxxxxxxxxx;

    location /static {
        autoindex on;
        alias /static/;
    }

    location /uploads {
        autoindex on;
        alias /uploads/;
    }

    location / {
        proxy_pass http://docker-api;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
If you see something wrong in my configuration or something I can do better, let me know!

This is happening because the nginx-api service comes up before the api service.
But I added the depends_on option?
You are right, and this option works for the docker-compose up case.
But unfortunately not for docker stack deploy, or, as the docs put it:
The depends_on option is ignored when deploying a stack in swarm mode
with a version 3 Compose file.
OK, so what can I do now?
Nothing. It's actually not a bug:
Docker swarm services (your stack services) are supposed to recover automatically on error (that's why you define the restart: always option), so it should work for you anyway.
If you are using the compose file only for deploying the stack and never with docker-compose up, you can remove the depends_on option completely; it means nothing to docker stack.
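If you need nginx to start cleanly even while the api service is not yet resolvable, a common workaround (a sketch, not part of the original answer) is to use Docker's embedded DNS server together with a variable in proxy_pass. With a variable, nginx defers name resolution to request time instead of failing at startup with "host not found in upstream":

```nginx
server {
    listen 80;

    # 127.0.0.11 is Docker's embedded DNS server inside containers;
    # valid=10s re-resolves the name periodically
    resolver 127.0.0.11 valid=10s;

    location / {
        # Using a variable forces nginx to resolve "api" per request,
        # so a temporarily missing upstream no longer aborts nginx startup
        set $upstream_api http://api;
        proxy_pass $upstream_api;
    }
}
```

Note that with this approach the static upstream block is dropped; the service name "api" is assumed to be reachable on the stack's overlay network.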

Related

Rewriting nginx.conf when converting docker-compose to Kubernetes using Kompose?

I am quite new to Kubernetes and I have been struggling to migrate my current docker-compose environment to Kubernetes.
I converted my docker-compose.yml to Kubernetes manifests using kompose.
So far, I can access each pod individually, but it seems I have some issues getting the pods to communicate with each other: my Nginx pod cannot access my app pod.
My docker-compose.yml is something like below
version: '3.3'
services:
  myapp:
    image: my-app
    build: ./docker/
    restart: unless-stopped
    container_name: app
    stdin_open: true
    volumes:
      - mnt:/mnt
    env_file:
      - .env
  mynginx:
    image: nginx:latest
    build: ./docker/
    container_name: nginx
    ports:
      - 80:80
    stdin_open: true
    restart: unless-stopped
    user: root
My Nginx.conf is something like below
server {
    listen 80;
    index index.html index.htm;
    root /mnt/volumes/statics/;

    location /myapp {
        proxy_pass http://myapp/index;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
I understand that docker-compose enables containers to communicate with each other through service names (myapp and mynginx in this case). Could somebody tell me what I need to do to achieve the same thing in Kubernetes?
Kompose did create Services for me. It turned out that what I missed was the docker-compose.override.yml file (apparently kompose just ignores override files).
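For context, the Kubernetes equivalent of compose service-name DNS is a Service object: pods reach each other through the Service's name. A minimal sketch (the name myapp, the selector label, and port 80 are assumptions matching the compose file above, not taken from the actual kompose output):

```yaml
# Hypothetical Service exposing the myapp pods inside the cluster;
# other pods (e.g. the nginx pod) can then reach them as http://myapp
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp     # must match the labels on the myapp pods
  ports:
    - port: 80       # port the Service exposes
      targetPort: 80 # port the container actually listens on
```

With such a Service in place, `proxy_pass http://myapp/...;` in the nginx pod resolves through cluster DNS just as the service name did under docker-compose.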

Nginx deployed in docker container doesn't expose nuxtjs deployed in another docker container (502 Bad Gateway)

I'm trying to run a Nuxt.js application using nginx as a proxy server, in two Docker containers: nginx and nuxt.
Here is how I build the nuxt application:
FROM node:11.15
ENV APP_ROOT /src
RUN mkdir ${APP_ROOT}
WORKDIR ${APP_ROOT}
ADD . ${APP_ROOT}
RUN npm install
RUN npm run build
ENV host 0.0.0.0
The result seems to be fine
Next is the nginx config:
server {
    listen 80;
    server_name dev.iceik.com.ua;

    location / {
        proxy_pass http://nuxt:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Also I've tried this nginx config
upstream nuxt {
    server nuxt:3000;
}

server {
    listen 80;
    server_name dev.iceik.com.ua;

    location / {
        proxy_pass http://nuxt;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
And finally my docker-compose file
version: "3"
services:
  nuxt:
    build: ./app/
    container_name: nuxt
    restart: always
    ports:
      - "3000:3000"
    command: "npm run start"
  nginx:
    image: nginx:1.17
    container_name: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx:/etc/nginx/conf.d
    depends_on:
      - nuxt
I can ping the nuxt container from the nginx container, and the expected ports are open.
So the expected result is that I can access my nuxt application; however, I'm getting 502 Bad Gateway.
Do you have any ideas why nginx doesn't expose my nuxt application?
Thank you for any suggestions!
Node.js is listening on localhost:3000 instead of 0.0.0.0:3000.
Please correct that and it will work.
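One way to make Nuxt listen on all interfaces is through its server config (a sketch of a nuxt.config.js fragment; this complements the `ENV host 0.0.0.0` line in the Dockerfile above, which Nuxt also honors):

```javascript
// nuxt.config.js (hypothetical fragment): bind the server to all interfaces
export default {
  server: {
    host: '0.0.0.0', // listen on all interfaces, not just localhost
    port: 3000
  }
}
```

Binding to 0.0.0.0 matters because "localhost" inside the nuxt container refers only to that container's loopback interface, which nginx in another container can never reach.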
It's always good to put your containers into a network if they need to talk to each other; another option is the host network (which only works on Linux). Try the docker-compose.yml below; the services should be able to reach each other by their container names.
version: "3"
services:
  nuxt:
    build: ./app/
    container_name: nuxt
    restart: always
    ports:
      - "3000:3000"
    command: "npm run start"
    networks:
      - my_net
  nginx:
    image: nginx:1.17
    container_name: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx:/etc/nginx/conf.d
    depends_on:
      - nuxt
    networks:
      - my_net
networks:
  my_net:
    driver: "bridge"

NGINX and Docker-Compose: host not found in upstream

I'm trying to get docker-compose to run an NGINX reverse-proxy and I'm running into an issue. I know that what I am attempting appears possible as it is outlined here:
https://dev.to/domysee/setting-up-a-reverse-proxy-with-nginx-and-docker-compose-29jg
and here:
https://www.digitalocean.com/community/tutorials/how-to-secure-a-containerized-node-js-application-with-nginx-let-s-encrypt-and-docker-compose#step-2-%E2%80%94-defining-the-web-server-configuration
My application is very simple - it has a front end and a back end (nextjs and nodejs), which I've put in docker-compose along with an nginx instance.
Here is the docker-compose file:
version: '3'
services:
  nodejs:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8000:8000"
    container_name: nodejs
    restart: unless-stopped
  nextjs:
    build:
      context: ../.
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    container_name: nextjs
    restart: unless-stopped
  webserver:
    image: nginx:mainline-alpine
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - web-root:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
    depends_on:
      - nodejs
      - nextjs
    networks:
      - app-network
volumes:
  certbot-etc:
  certbot-var:
  web-root:
    driver: local
    driver_opts:
      type: none
      device: /
      o: bind
networks:
  app-network:
    driver: bridge
And here is the nginx file:
server {
    listen 80;
    listen [::]:80;
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name patientplatypus.com www.patientplatypus.com localhost;

    location /back {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://nodejs:8000;
    }

    location / {
        proxy_pass http://nextjs:3000;
    }

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }
}
Both of these are very similar to the DigitalOcean example and I can't think of how they would be different enough to cause errors. I run it with a simple docker-compose up -d --build.
When I go to localhost:80 I get "page could not be found", and here is the result of docker ps and docker logs:
patientplatypus:~/Documents/patientplatypus.com/forum/back:10:03:32$docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9c2e4e25e6d9 nginx:mainline-alpine "nginx -g 'daemon of…" 2 minutes ago Restarting (1) 14 seconds ago webserver
213e73495381 back_nodejs "/docker-entrypoint.…" 2 minutes ago Up 2 minutes 0.0.0.0:8000->8000/tcp nodejs
03b6ae8f0ad4 back_nextjs "npm start" 2 minutes ago Up 2 minutes 0.0.0.0:3000->3000/tcp nextjs
patientplatypus:~/Documents/patientplatypus.com/forum/back:10:05:41$docker logs 9c2e4e25e6d9
2019/04/10 15:03:32 [emerg] 1#1: host not found in upstream "nodejs" in /etc/nginx/conf.d/nginx.conf:20
I'm pretty lost as to what could be going wrong. If anyone has any ideas please let me know. Thank you.
EDIT: SEE SOLUTION BELOW
The nginx webserver is on the network app-network, which is different from the other two services, which have no network defined. When no network is defined, docker-compose creates a default network for those services to share.
Either copy the networks setting to both of the other services, or remove it from the webserver service.
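Applying the first option to the compose file above would look like this (a sketch showing only the changed parts; everything else stays as before):

```yaml
services:
  nodejs:
    # ...existing settings unchanged...
    networks:
      - app-network
  nextjs:
    # ...existing settings unchanged...
    networks:
      - app-network
```

Once all three services share app-network, the names "nodejs" and "nextjs" resolve from inside the nginx container and the upstream error goes away.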

Docker Nginx reverse proxy to separate container

I'm having trouble creating a reverse proxy and having it point at apps that are in other containers.
What I have now is a docker-compose for Nginx, and then I want to have separate docker-containers for several different apps and have Nginx direct traffic to those apps.
My Nginx docker-compose is:
version: "3"
services:
  nginx:
    image: nginx:alpine
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf
My default.conf is:
server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

server {
    listen 80;
    server_name www.mydomain.com;

    location /confluence {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://192.168.1.50:8090/confluence;
    }
}
I can access confluence directly at: http://192.168.1.50:8090/confluence
My compose for confluence is:
version: "3"
services:
  db:
    image: postgres:9.6
    container_name: pg_confluence
    env_file:
      - env.list
    ports:
      - "5434:5432"
    volumes:
      - ./pg_conf.sql:/docker-entrypoint-initdb.d/pg_conf.sql
      - dbdata:/var/lib/postgresql/data
  confluence:
    image: my_custom_image/confluence:6.11.0
    container_name: confluence
    volumes:
      - confluencedata:/var/atlassian/application-data/confluence
      - ./server.xml:/opt/atlassian/confluence/conf/server.xml
    environment:
      - JVM_MAXIMUM_MEMORY=2g
    ports:
      - "8090:8090"
    depends_on:
      - db
volumes:
  confluencedata:
  dbdata:
I am able to see the Nginx "Welcome" screen when I hit mydomain.com but if I hit mydomain.com/confluence it gives a not found.
So it looks like Nginx is running, just not sending the traffic to the other container properly.
========================
=== Update With Solution ===
========================
I ended up switching to Traefik instead of Nginx. When I take the next step and start learning k8s this will help as well.
Although these network settings are what you need even if you stick with Nginx, I just didn't test them against Nginx, so hopefully they are helpful no matter which one you end up using.
For the confluence docker-compose.yml I added:
networks:
  proxy:
    external: true
  internal:
    external: false
services:
  confluence:
    ...
    networks:
      - internal
      - proxy
  db:
    ...
    networks:
      - internal
And for the traefik docker-compose.yml I added:
networks:
  proxy:
    external: true
services:
  reverse-proxy:
    networks:
      - proxy
I had to create the network manually with:
docker network create proxy
This is not really the correct way to use Docker.
If you are in a production environment, use a real orchestration tool (nowadays Kubernetes is the way to go).
If you are on your own computer, you can reference the name of a container (or an alias) only if you use the same network AND that network is not the default one.
One way is to have only one docker-compose file.
Another way is to share the same network across your docker-compose files.
Create a network: docker network create --driver bridge my_network
Then use it in each docker-compose file you have:
networks:
  default:
    external:
      name: my_network

Docker service redirection based on url path

I am using docker swarm and deploying 3 tomcat services, each running on 8443 within its container and on 8444, 8445, 8446 on the host.
I am looking to use a proxy server running on 8443 which will redirect incoming requests to the corresponding service based on the URL path:
https://hostname:8443/a -> https://hostname:8444/a
https://hostname:8443/b -> https://hostname:8445/b
https://hostname:8443/c -> https://hostname:8446/c
My sample docker-compose file:
version: "3"
services:
  tomcat1:
    image: tomcat:1
    ports:
      - "8446:8443"
  tomcat2:
    image: tomcat:1
    ports:
      - "8444:8443"
  tomcat3:
    image: tomcat:1
    ports:
      - "8445:8443"
I have explored Traefik and nginx but was not able to find how to reroute based on the URL path. Any suggestions?
You could use Traefik, with rules based on the Host and Path labels:
http://docs.traefik.io/basics/#frontends
Something like:
version: "3"
services:
  traefik:
    image: traefik
    command: --web --docker --docker.swarmmode --docker.watch --docker.domain=hostname
    ports:
      - "80:80"
      - "8080:8080"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      placement:
        constraints: [node.role == manager]
      restart_policy:
        condition: on-failure
  tomcat1:
    image: tomcat:1
    labels:
      - traefik.backend=tomcat1
      - traefik.frontend.rule=Host:hostname;PathPrefixStrip:/a
      - traefik.port=8443
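For completeness, the other two services would carry analogous labels (a sketch following the same Traefik v1 rule syntax shown above; the /b and /c prefixes are assumptions matching the question's URL scheme):

```yaml
  tomcat2:
    image: tomcat:1
    labels:
      - traefik.backend=tomcat2
      - traefik.frontend.rule=Host:hostname;PathPrefixStrip:/b
      - traefik.port=8443
  tomcat3:
    image: tomcat:1
    labels:
      - traefik.backend=tomcat3
      - traefik.frontend.rule=Host:hostname;PathPrefixStrip:/c
      - traefik.port=8443
```

Note that traefik.port points at the port inside the container (8443), so Traefik talks to the services over the swarm network and the host port mappings are no longer needed for routing.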
You can try the way I did it using nginx.
On Ubuntu, inside /etc/nginx/sites-available you will find the default file.
Inside its server block, add new location blocks:
server {
    listen 8443;
    # this is a comment

    location /a {
        proxy_pass http://[::]:8444/.;
        # i have commented these out because i don't know if you need them
        #proxy_http_version 1.1;
        #proxy_set_header Upgrade $http_upgrade;
        #proxy_set_header Connection keep-alive;
        #proxy_set_header Host $host;
        #proxy_cache_bypass $http_upgrade;
        #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /b {
        proxy_pass http://[::]:8445/.;
    }

    location /c {
        proxy_pass http://[::]:8446/.;
    }
}
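The same routing can also be written with named upstream blocks, which keeps the backend addresses in one place. This is a sketch under the same assumptions (the names app_a, app_b, app_c are made up for illustration, and the three tomcats are assumed reachable on their published host ports):

```nginx
upstream app_a { server 127.0.0.1:8444; }
upstream app_b { server 127.0.0.1:8445; }
upstream app_c { server 127.0.0.1:8446; }

server {
    listen 8443;

    # each path prefix maps to the matching tomcat's published port
    location /a { proxy_pass http://app_a; }
    location /b { proxy_pass http://app_b; }
    location /c { proxy_pass http://app_c; }
}
```

After editing, test and reload the config (nginx -t, then reload) so a syntax error doesn't take the proxy down.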
