I'm having trouble creating a reverse proxy and pointing it at apps that run in other containers.
What I have now is a docker-compose file for Nginx, and I want separate Docker containers for several different apps, with Nginx directing traffic to those apps.
My Nginx docker-compose is:
version: "3"
services:
nginx:
image: nginx:alpine
volumes:
- ./default.conf:/etc/nginx/conf.d/default.conf
My default.conf is:
server {
listen 80;
server_name localhost;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
server {
listen 80;
server_name www.mydomain.com;
location /confluence {
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://192.168.1.50:8090/confluence;
}
}
I can access confluence directly at: http://192.168.1.50:8090/confluence
My compose for confluence is:
version: "3"
services:
db:
image: postgres:9.6
container_name: pg_confluence
env_file:
- env.list
ports:
- "5434:5432"
volumes:
- ./pg_conf.sql:/docker-entrypoint-initdb.d/pg_conf.sql
- dbdata:/var/lib/postgresql/data
confluence:
image: my_custom_image/confluence:6.11.0
container_name: confluence
volumes:
- confluencedata:/var/atlassian/application-data/confluence
- ./server.xml:/opt/atlassian/confluence/conf/server.xml
environment:
- JVM_MAXIMUM_MEMORY=2g
ports:
- "8090:8090"
depends_on:
- db
volumes:
confluencedata:
dbdata:
I am able to see the Nginx "Welcome" page when I hit mydomain.com, but mydomain.com/confluence returns a "not found" error.
So it looks like Nginx is running; it's just not forwarding traffic to the other container properly.
========================
=== Update With Solution ===
========================
I ended up switching to Traefik instead of Nginx. When I take the next step and start learning k8s this will help as well.
These network settings are what you need even if you stick with Nginx; I just didn't test them against Nginx, so hopefully they are helpful no matter which proxy you end up using.
For the confluence docker-compose.yml I added:
networks:
proxy:
external: true
internal:
external: false
services:
confluence:
...
networks:
- internal
- proxy
db:
...
networks:
- internal
And for the traefik docker-compose.yml I added:
networks:
proxy:
external: true
services:
reverse-proxy:
networks:
- proxy
I had to create the network manually with:
docker network create proxy
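For reference, once both compose projects are attached to the shared proxy network, the reverse proxy can reach Confluence by its service name instead of the host IP. A minimal sketch of the Nginx location for that setup (untested on my side, since I moved to Traefik; the confluence name comes from the compose file above):

location /confluence {
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # Docker's embedded DNS resolves the service name on the shared "proxy" network
    proxy_pass http://confluence:8090/confluence;
}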
This is not really the correct way to use Docker.
If you are in a production environment, use a real orchestration tool (nowadays Kubernetes is the way to go).
If you are on your own computer, you can reference a container by its name (or an alias) only if both containers use the same network AND that network is not the default one.
One way is to have only one docker-compose file.
Another way is to use the same network across your docker-compose files:
Create a network: docker network create --driver bridge my_network
Use it in each docker-compose file you have:
networks:
default:
external:
name: my_network
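As a rough sketch of that second approach (the service and image names here are made up for illustration, not taken from the question), two compose files sharing my_network could look like this:

# proxy/docker-compose.yml
version: "3"
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
networks:
  default:
    external:
      name: my_network

# app/docker-compose.yml
version: "3"
services:
  myapp:
    image: myapp-image   # hypothetical app image
networks:
  default:
    external:
      name: my_network

With both stacks on my_network, Nginx can then proxy_pass to http://myapp:PORT using the service name (PORT being whatever the app listens on).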
I have a project running on Docker, and I use an Nginx reverse proxy in front of my app.
Everything works fine, but I'm trying to customize the server_name in Nginx and couldn't figure out how.
Here is my Docker yml file.
I've added the server name to /etc/hosts via Docker (extra_hosts).
version: "3"
services:
nginx:
container_name: nginx
volumes:
- ./nginx/logs/nginx:/var/log/nginx
build:
context: ./nginx
dockerfile: ./Dockerfile
depends_on:
- menu-app
ports:
- "80:80"
- "433:433"
extra_hosts:
- "www.qr-menu.loc:172.18.0.100"
- "www.qr-menu.loc:127.0.0.1"
networks:
default:
ipv4_address: 172.18.0.100
menu-app:
image: menu-app
container_name: menu-app
volumes:
- './menu-app/config:/var/www/config'
- './menu-app/core:/var/www/core'
- './menu-app/ecosystem.json:/var/www/ecosystem.json'
- './menu-app/tsconfig.json:/var/www/tsconfig.json'
- './menu-app/tsconfig-build.json:/var/www/tsconfig-build.json'
- "./menu-app/src:/var/www/src"
- "./menu-app/package.json:/var/www/package.json"
build:
context: .
dockerfile: menu-app/.docker/Dockerfile
tmpfs:
- /var/www/dist
ports:
- "3000:3000"
extra_hosts:
- "www.qr-menu.loc:127.0.0.1"
- "www.qr-menu.loc:172.18.0.100"
networks:
default:
ipam:
driver: default
config:
- subnet: 172.18.0.0/24
And here is my Nginx conf:
server_names_hash_bucket_size 1024;
upstream local_pwa {
server menu-app:3000;
keepalive 8;
}
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name www.qr-menu.loc 172.18.0.100;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
location / {
proxy_set_header X-Forwarded-For $remote_addr;
proxy_pass http://local_pwa/;
}
}
But unfortunately the app runs on localhost instead of www.qr-menu.loc.
I couldn't figure out how to change the server_name in Nginx.
This is a really, really late answer. The server_name directive tells nginx which configuration block to use on receipt of a request. Also see: http://nginx.org/en/docs/http/server_names.html
I think the docker-compose extra_hosts directive might only work for domain-name resolution within the Docker network. In other words, on the computer that's running Docker the name "www.qr-menu.loc" is not available, but inside a running Docker container that name should be available.
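If the goal is to open www.qr-menu.loc in a browser on the host machine, one option (a sketch of a manual step outside docker-compose) is to add the name to the host machine's own hosts file so it resolves to the loopback address where port 80 is published:

# /etc/hosts on the machine running Docker (not inside a container)
127.0.0.1   www.qr-menu.loc

Nginx will then pick the server block whose server_name matches the Host header the browser sends.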
I am quite new to Kubernetes and I have been struggling to migrate my current docker-compose environment to Kubernetes...
I converted my docker-compose.yml to Kubernetes manifests using kompose.
So far, I can access each pod individually, but it seems I have some issues getting those pods to communicate with each other: my Nginx pod cannot access my app pod.
My docker-compose.yml is something like below
version: '3.3'
services:
myapp:
image: my-app
build: ./docker/
restart: unless-stopped
container_name: app
stdin_open: true
volumes:
- mnt:/mnt
env_file:
- .env
mynginx:
image: nginx:latest
build: ./docker/
container_name: nginx
ports:
- 80:80
stdin_open: true
restart: unless-stopped
user: root
My Nginx.conf is something like below
server {
listen 80;
index index.html index.htm;
root /mnt/volumes/statics/;
location /myapp {
proxy_pass http://myapp/index;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
    }
}
I understand that docker-compose enables containers to communicate with each other through service names (myapp and mynginx in this case). Could somebody tell me what I need to do to achieve the same thing in Kubernetes?
Kompose did create Services for me. It turned out that what I missed was the docker-compose.override file (apparently kompose just ignores override.yml).
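For anyone landing here: a Kubernetes Service is what gives pods the stable DNS name that replaces the compose service name. A minimal sketch for the app above (the port and the kompose-style selector label are assumptions, not the exact manifest kompose generated):

apiVersion: v1
kind: Service
metadata:
  name: myapp                      # the hostname used in proxy_pass http://myapp/index
spec:
  selector:
    io.kompose.service: myapp      # label kompose typically adds to the generated pods
  ports:
    - port: 80                     # assumed application port
      targetPort: 80

With this Service in the same namespace, the Nginx pod can resolve myapp just like the compose service name.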
I'm trying to deploy a stack with docker.
Here is how my stack works:
nginx-proxy (redirects user requests to the right container)
website (simple nginx serving a website)
api (django application, launched with gunicorn)
nginx-api (serves static files and uploaded files, and forwards to the api container if the request is an API endpoint)
This is my docker-compose.yml:
version: '3.2'
services:
website:
container_name: nyl2pronos-website
image: nyl2pronos-website
restart: always
build:
context: nyl2pronos_webapp
dockerfile: Dockerfile
volumes:
- ./logs/nginx-website:/var/log/nginx
expose:
- "80"
deploy:
replicas: 10
update_config:
parallelism: 5
delay: 10s
api:
container_name: nyl2pronos-api
build:
context: nyl2pronos_api
dockerfile: Dockerfile
image: nyl2pronos-api
restart: always
ports:
- 8001:80
expose:
- "80"
depends_on:
- db
- memcached
environment:
- DJANGO_PRODUCTION=1
volumes:
- ./data/api/uploads:/code/uploads
- ./data/api/static:/code/static
nginx-api:
image: nginx:latest
container_name: nyl2pronos-nginx-api
restart: always
expose:
- "80"
volumes:
- ./data/api/uploads:/uploads
- ./data/api/static:/static
- ./nyl2pronos_api/config:/etc/nginx/conf.d
- ./logs/nginx-api:/var/log/nginx
depends_on:
- api
nginx-proxy:
image: nginx:latest
container_name: nyl2pronos-proxy
restart: always
ports:
- 80:80
- 443:443
volumes:
- ./proxy:/etc/nginx/conf.d
- /etc/letsencrypt:/etc/letsencrypt
- ./logs/nginx-proxy:/var/log/nginx
deploy:
placement:
constraints: [node.role == manager]
depends_on:
- nginx-api
- website
When I use docker-compose up, everything works fine.
But when I try to deploy with docker stack deploy --compose-file=docker-compose.yml prod, my nginx config files can't find the different upstreams.
This is the error provided by my service nginx-api:
2019/03/23 17:32:41 [emerg] 1#1: host not found in upstream "api" in /etc/nginx/conf.d/nginx.conf:2
See my nginx.conf below:
upstream docker-api {
server api;
}
server {
listen 80;
server_name xxxxxxxxxxxxxx;
location /static {
autoindex on;
alias /static/;
}
location /uploads {
autoindex on;
alias /uploads/;
}
location / {
proxy_pass http://docker-api;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
}
If you see something wrong in my configuration or something I can do better, let me know!
This is happening because the nginx-api service comes up before the api service.
But I added the depends_on option?
You are right, and that option does work for the docker-compose up case.
But unfortunately not for docker stack deploy, or, as the docs put it:
The depends_on option is ignored when deploying a stack in swarm mode with a version 3 Compose file.
OK, so what can I do now?
Nothing; it's actually not a bug.
Docker swarm tasks (the containers behind your stack services) are supposed to recover automatically on error (that's why you define the restart: always option), so it should work for you anyway.
If you are using the compose file only for deploying the stack and not with docker-compose up, you can remove the depends_on option completely; it means nothing to docker stack.
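One related detail worth noting (an aside, not part of the original exchange): in swarm mode the container-level restart: always setting is also ignored; the stack-level equivalent lives under deploy, so the recovery behaviour described above is configured roughly like this:

services:
  nginx-api:
    image: nginx:latest
    deploy:
      restart_policy:
        condition: on-failure   # swarm keeps restarting the task until "api" becomes resolvable
        delay: 5s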
My docker-compose.yaml is
version: '3'
services:
nginx:
restart: always
build: ./nginx/
depends_on:
- web
ports:
- "8000:8000"
network_mode: "host" # Connection between containers
web:
build: .
image: app-image
ports:
- "80:80"
volumes:
- .:/app-name
command: uwsgi /app-path/web/app.ini
NGINX conf file is
upstream web {
server 0.0.0.0:80;
}
server {
listen 8000;
server_name web;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
alias "/app-static/";
}
location / {
proxy_pass http://web;
}
}
So basically I have Django and uWSGI in one container ('web') and NGINX in another ('nginx'). I linked the two by proxying through NGINX and both work fine. (I somehow needed network_mode: "host"; without it, this didn't work.)
Since they are different containers, I cannot use a .sock file (unless I use some volume hack to share the .sock file, which is not good!).
Even though this works, I have been asked to avoid this NGINX proxy setup, so is there any other way to connect these two?
Searching didn't get me any alternatives to try.
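For what it's worth, the pattern the other answers on this page rely on would be a sketch like the following: drop network_mode: "host", let both services share the compose project's default network, and point the upstream at the web service name (ports and paths kept from the question, otherwise untested):

version: '3'
services:
  nginx:
    build: ./nginx/
    depends_on:
      - web
    ports:
      - "8000:8000"
  web:
    build: .
    image: app-image
    command: uwsgi /app-path/web/app.ini
    expose:
      - "80"

And the corresponding upstream in the NGINX conf:

# reference the service name instead of 0.0.0.0
upstream web {
    server web:80;
}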
I use multiple docker-compose files:
one for the services that run on the same network: postgres and nginx
=> this collection of containers is supposed to be always running
one for each asp core web site (each one on a specific port)
=> these containers are updated through a CI/CD pipeline (VSTS)
Because Nginx needs to know the hostname when defining the upstream, if the asp core container is not running then its hostname is not known, and nginx throws an error on the docker-compose up command:
nginx | 2018/01/04 15:59:17 [emerg] 1#1: host not found in upstream
"webportalstage:5001" in /etc/nginx/nginx.conf:9
nginx | nginx: [emerg] host not found in upstream
"webportalstage:5001" in /etc/nginx/nginx.conf:9
nginx exited with code 1
And obviously, if the asp core container is already running, then nginx knows the hostname webportalstage and everything works fine. But the startup sequence is not what I expect.
Is there any solution to start nginx with a not-yet-known hostname in the upstream?
Here is my nginx.conf file:
worker_processes 4;
events { worker_connections 1024; }
http {
sendfile on;
upstream webportalstage {
server webportalstage:5001;
}
server {
listen 80;
location / {
proxy_pass http://webportalstage;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
}
}
And both docker-compose files:
Nginx + Postgres:
version: "3"
services:
proxy:
image: myPrivateRepo:latest
ports:
- "80:80"
container_name: nginx
networks:
aspcore:
aliases:
- nginx
postgres:
image: postgres:latest
environment:
- POSTGRES_PASSWORD=myPWD
- POSTGRES_USER=postgres
ports:
- "5432:5432"
container_name: postgres
networks:
aspcore:
aliases:
- postgres
networks:
aspcore:
driver: bridge
One of my asp core web sites:
version: "3"
services:
webportal:
image: myPrivateRepo:latest
environment:
- ASPNETCORE_ENVIRONMENT=Staging
container_name: webportal
networks:
common_aspcore:
aliases:
- webportal
networks:
common_aspcore:
external: true
Well, I use the following hack in a similar situation:
location / {
set $docker_host "webportalstage";
proxy_pass http://$docker_host:5001;
...
}
I'm not sure if it works with upstream, but it probably should.
I know this is not the best solution, but I didn't find anything better.
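One caveat to add (an assumption about running inside a user-defined Docker network, not something stated in the answer): when proxy_pass uses a variable, nginx resolves the hostname at request time and needs a resolver directive, which inside a Docker network can point at Docker's embedded DNS:

location / {
    resolver 127.0.0.11 valid=10s;   # Docker's embedded DNS inside user-defined networks
    set $docker_host "webportalstage";
    proxy_pass http://$docker_host:5001;
}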
I finally used the extra_hosts feature to define a static IP within my nginx+postgres docker-compose.yml file:
extra_hosts:
webportalstage: 10.5.0.20
And I set the same static IP in my asp core docker-compose file.
It works, but it's not as generic as I would like.
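For completeness, pinning that same address on the asp core side would look roughly like this (a sketch; the subnet is an assumption, only the 10.5.0.20 address comes from the answer above):

# the shared network has to be created with a matching subnet, e.g.:
# docker network create --driver bridge --subnet 10.5.0.0/24 common_aspcore
services:
  webportal:
    networks:
      common_aspcore:
        ipv4_address: 10.5.0.20   # must match the extra_hosts entry on the nginx side
networks:
  common_aspcore:
    external: true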