I'm trying to use docker-compose to create a dynamic, fast development environment, and I want to use nginx to route all the services. This is my configuration:
docker-compose.yml
version: '3.1'

services:
  nginx:
    image: nginx
    ports:
      - 80:80
    volumes:
      - ./nginx:/etc/nginx/conf.d
  wordpress:
    image: wordpress
    restart: always
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: exampleuser
      WORDPRESS_DB_PASSWORD: examplepass
      WORDPRESS_DB_NAME: exampledb
    volumes:
      - ./wordpress:/var/www/html
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: exampledb
      MYSQL_USER: exampleuser
      MYSQL_PASSWORD: examplepass
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
    volumes:
      - ./db:/var/lib/mysql
nginx conf.d
server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://wordpress:80/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
    }
}
But it doesn't work: the browser always ends up redirected from http://localhost to http://localhost:8080.
What should I do?
Here are the main issues to address in your sample code:
Both the nginx and wordpress Docker images listen on port 80 by default, so you should publish wordpress on a different host port, for example 8080.
The containers will not be able to see each other unless you set up a network for them.
Update the nginx configuration to remove the port from the wordpress address. Being on the same network, the containers see each other using their host names alone (i.e., their service names).
I also had to change the way you declare the volumes used by the wordpress and mysql images.
So this is what I suggest:
docker-compose.yml
version: '3.1'

services:
  nginx:
    image: nginx
    ports:
      - 80:80
    volumes:
      - ./nginx:/etc/nginx/conf.d
    networks:
      - backend
  wordpress:
    image: wordpress
    ports:
      - 8080:80
    restart: always
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: exampleuser
      WORDPRESS_DB_PASSWORD: examplepass
      WORDPRESS_DB_NAME: exampledb
    volumes:
      - wordpress:/var/www/html
    networks:
      - backend
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: exampledb
      MYSQL_USER: exampleuser
      MYSQL_PASSWORD: examplepass
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
    volumes:
      - db:/var/lib/mysql
    networks:
      - backend

volumes:
  wordpress:
  db:

networks:
  backend:
    driver: bridge
nginx.conf
server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://wordpress/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
    }
}
You can check more details about networking in Docker Compose in the documentation.
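To sanity-check the result, you can bring the stack up and confirm that nginx reaches the wordpress container by service name. A minimal check, assuming the files above sit in the current directory (use docker-compose with a hyphen on older installs):

docker compose up -d
# nginx now answers on host port 80 and proxies to wordpress by service name
curl -I http://localhost/
# verify DNS-based service discovery from inside the nginx container
docker compose exec nginx getent hosts wordpress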
This is sort of a follow-up question to this question.
Originally, I tried to get Keycloak to work in Docker and needed TLS, so I used nginx with docker compose. But I got an infinite spinner like the people in that question, which I found via Google while trying to solve my problem. The answers there said not to set KC_HOSTNAME_PORT, so I tried that, and indeed it worked with port 443.
That is fine and good, but I want to get Keycloak to work in my setup on a different port such as 8443. Can someone explain how to do this based on the setup offered in the original question I referred to? Or post a complete example with a docker-compose.yml of how to do it with nginx or traefik?
EDIT: If it helps, here is my docker-compose.yml:
version: '3'

services:
  keycloak:
    image: quay.io/keycloak/keycloak:19.0.2
    container_name: keycloak
    environment:
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: admin
      PROXY_ADDRESS_FORWARDING: 'true'
      KC_HOSTNAME_STRICT: 'false'
      KC_HTTP_ENABLED: 'true'
      KC_PROXY: 'edge'
      # more
      KC_PROXY_ADDRESS_FORWARDING: "true"
      KC_HOSTNAME: kvm1.home
      #KC_HOSTNAME_PORT: 4443
    ports:
      - "8080:8080"
    command:
      - start-dev
      - "--proxy=edge"
      - "--hostname-strict-https=false"
  nginx:
    image: nginx:1.23.1
    container_name: nginx
    ports:
      #- "8000:80"
      #- "4443:443"
      - "80:80"
      - "443:443"
    environment:
      - NGINX_HOST=localhost
      - NGINX_PORT=80
    volumes:
      - ./templates:/etc/nginx/templates
      - ./ssl:/etc/nginx/ssl
      - ./sites-enabled:/etc/nginx/sites-enabled
      - ./nginx.conf:/etc/nginx/nginx.conf:rw
And here is the nginx server block:

server {
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;
    # include snippets/snakeoil.conf;
    ssl_certificate /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name kvm1.home;

    location / {
        proxy_pass http://kvm1.home:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_protocol_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
If I uncomment KC_HOSTNAME_PORT and the alternative port mappings in the nginx service, I get the infinite spinner.
If you want to expose Keycloak on a different port, you need to make two changes:
Change the port on which you're publishing the web-secure entrypoint from Traefik.
Set KC_HOSTNAME_PORT to match the new port.
So that gets us:
version: "3"
services:
traefik:
image: docker.io/traefik
command:
- --api.insecure=true
- --providers.docker
- --entrypoints.web.address=:80
- --entrypoints.web-secure.address=:443
ports:
- "127.0.0.1:8080:8080"
- "80:80"
- "8443:443"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
keycloak:
image: quay.io/keycloak/keycloak
restart: always
command: start
environment:
KC_PROXY_ADDRESS_FORWARDING: "true"
KC_HOSTNAME_STRICT: "false"
KC_HOSTNAME: auth.example.com
KC_HOSTNAME_PORT: 8443
KC_PROXY: edge
KC_HTTP_ENABLED: "true"
KC_DB: postgres
KC_DB_URL: jdbc:postgresql://postgres:5432/$POSTGRES_DB?ssl=allow
KC_DB_USERNAME: $POSTGRES_USER
KC_DB_PASSWORD: $POSTGRES_PASSWORD
KEYCLOAK_ADMIN: admin
KEYCLOAK_ADMIN_PASSWORD: password
labels:
- "traefik.http.routers.cloud-network-keycloak.rule=Host(`auth.example.com`)"
- "traefik.http.routers.cloud-network-keycloak.tls=true"
- "traefik.http.services.cloud-network-keycloak.loadbalancer.server.port=8080"
postgres:
image: docker.io/postgres:14
environment:
POSTGRES_USER: $POSTGRES_USER
POSTGRES_PASSWORD: $POSTGRES_PASSWORD
POSTGRES_DB: $POSTGRES_DB
With this configuration, and an appropriate entry in my local /etc/hosts file, I can access Keycloak at https://auth.example.com:8443.
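For local testing, the hostname must resolve to the machine running Traefik, so an /etc/hosts entry along these lines (assuming everything runs on one host) is all that's needed:

127.0.0.1   auth.example.com

Then a quick check against the published port (the -k flag skips certificate verification, since Traefik serves a self-signed default certificate unless a real one is configured):

curl -k -I https://auth.example.com:8443/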
I have an existing NGINX server hosting two websites, one served as standard static content and one from a Node server. I also want to run three Docker containers on this machine.
All of the tutorials suggest running NGINX in a container, but that would conflict with my existing setup. The three containers are:
nodejs server, ports 3030:3030
mysql, ports 3360:3360
phpmyadmin, ports 8080:80
They run fine on localhost on my local machine, but I can't get NGINX on the remote server to host them.
I want to be able to access the node server at http://publicIP:3030.
I have tried to follow this answer, but NGINX gives me a 404 error when I try to access the containers.
My nginx config is:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html index.php;

    server_name _;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    location /paragon/ {
        proxy_pass http://localhost:3030/;
        # proxy_set_header X-SRV paragon;
    }

    location /phpmyadmin {
        proxy_pass http://localhost:8080/;
        # proxy_set_header X-SRV phpmyadmin;
    }

    location /mysql {
        proxy_pass http://localhost:3360/;
        # proxy_set_header X-SRV mysql;
    }
}
I have tried it with the X-SRV headers uncommented as well.
My docker-compose.yml config is:
services:
  web:
    container_name: paragon_web
    build: .
    command: npm run
    depends_on:
      - db
    volumes:
      - ./:/app
      - /node_modules
    networks:
      - paragon_net
    ports:
      - "3030:3030"
  db:
    container_name: paragon_db
    image: mysql:8.0
    command:
      --default-authentication-plugin=mysql_native_password
      --init-file ./src/data/db_init.sql
    restart: unless-stopped
    volumes:
      - ./src/data/db_init.sql:/docker-entrypoint-initdb.d/
      - mysql-data:/var/lib/mysql
    ports:
      - "3360:3306"
    expose:
      - "3306"
    environment:
      MYSQL_DATABASE: paragon
      MYSQL_ROOT_PASSWORD: root
      MYSQL_USER: admin
      MYSQL_PASSWORD: paragon99
      SERVICE_TAG: dev
      SERVICE_NAME: paragon_db
    networks:
      - paragon_net
    # volumes:
  phpmyadmin:
    container_name: sql_admin
    image: phpmyadmin:5.2.0-apache
    restart: always
    depends_on:
      - db
    ports:
      - "8090:80"
    networks:
      - paragon_net

networks:
  paragon_net:
    driver: bridge
The new site is located at /var/www/newsite on the server.
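One quick way to narrow a 404 like this down is to confirm, from the server itself, that each container answers on its published host port before testing the proxied paths (ports taken from the compose file above; note that the compose file publishes phpmyadmin on 8090, while the nginx config proxies /phpmyadmin to 8080):

# each of these should return an HTTP status line if the container is reachable
curl -I http://localhost:3030/
curl -I http://localhost:8090/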
I'm trying to get phpMyAdmin to work on my live server on Vultr. I have a full-stack React app for the front-end and an Express/Node.js back-end, with MySQL as the database and phpMyAdmin to create tables and such. Both the React app and the Express server work, but phpMyAdmin doesn't.
Below is my docker-compose file:
version: '3.7'

services:
  mysql_db:
    image: mysql
    container_name: mysql_container
    restart: always
    cap_add:
      - SYS_NICE
    volumes:
      - ./data:/var/lib/mysql
    ports:
      - "3306:3306"
    env_file:
      - .env
    environment:
      MYSQL_ROOT_PASSWORD: "${MYSQL_ROOT_PASSWORD}"
      MYSQL_HOST: "${MYSQL_HOST}"
      MYSQL_DATABASE: "${MYSQL_DATABASE}"
      MYSQL_USER: "${MYSQL_USER}"
      MYSQL_PASSWORD: "${MYSQL_PASSWORD}"
    networks:
      - react_network
  phpmyadmin:
    depends_on:
      - mysql_db
    image: phpmyadmin/phpmyadmin:latest
    container_name: phpmyadmin_container
    restart: always
    ports:
      - "8080:80"
    env_file:
      - .env
    environment:
      - PMA_HOST=mysql_db
      - PMA_PORT=3306
      - PMA_ABSOLUTE_URI=https://my-site.com/admin
      - PMA_ARBITRARY=1
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
    networks:
      - react_network
  api:
    restart: always
    image: mycustomimage
    ports:
      - "3001:80"
    container_name: server_container
    env_file:
      - .env
    depends_on:
      - mysql_db
    environment:
      MYSQL_HOST_IP: mysql_db
    networks:
      - react_network
  client:
    image: mycustomimage
    ports:
      - "3000:80"
    restart: always
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
    container_name: client_container
    networks:
      - react_network
  nginx:
    depends_on:
      - api
      - client
    build: ./nginx
    container_name: nginx_container
    restart: always
    ports:
      - "443:443"
      - "80"
    volumes:
      - ./nginx/conf/certificate.crt:/etc/ssl/certificate.crt:ro
      - ./nginx/certs/private.key:/etc/ssl/private.key:ro
      - ./nginx/html:/usr/share/nginx/html
    networks:
      - react_network

volumes:
  data:
  conf:
  certs:
  webconf:
  html:

networks:
  react_network:
Below is my nginx configuration file:
upstream client {
    server client:3000;
}

upstream api {
    server api:3001;
}

server {
    listen 443 ssl http2;
    server_name my-site.com;
    ssl_certificate /etc/ssl/certificate.crt;
    ssl_certificate_key /etc/ssl/private.key;

    location / {
        proxy_pass http://client;
    }

    location /admin {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://phpmyadmin:8080;
    }

    location /sockjs-node {
        proxy_pass http://client;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    location /api {
        rewrite /api/(.*) /$1 break;
        proxy_pass http://api;
    }
}

server {
    listen 80;
    server_name my-site.com www.my-site.com;
    return 301 https://my-site.com$request_uri;
}
I honestly don't know what I'm missing here. If anyone can help me, please do!
I get a 502 Bad Gateway error!
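A 502 from nginx means the proxied connection to the upstream failed, so the first step is to see what nginx itself logs. A minimal debugging sketch, using the container names from the compose file above; keep in mind that inside the compose network, services are reached on their container ports (phpmyadmin listens on 80 there; 8080 is only the published host port):

# nginx logs a "connect() failed ... while connecting to upstream" line on each 502
docker logs nginx_container
# compare the published host ports against the container ports nginx should target
docker port phpmyadmin_container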
I'm trying to deploy a stack with docker.
Here is how my stack works:
nginx-proxy (redirects user requests to the right container)
website (a simple nginx container serving a website)
api (a Django application, launched with gunicorn)
nginx-api (serves static and uploaded files, and forwards requests to the api container when they hit an API endpoint)
This is my docker-compose.yml:
version: '3.2'

services:
  website:
    container_name: nyl2pronos-website
    image: nyl2pronos-website
    restart: always
    build:
      context: nyl2pronos_webapp
      dockerfile: Dockerfile
    volumes:
      - ./logs/nginx-website:/var/log/nginx
    expose:
      - "80"
    deploy:
      replicas: 10
      update_config:
        parallelism: 5
        delay: 10s
  api:
    container_name: nyl2pronos-api
    build:
      context: nyl2pronos_api
      dockerfile: Dockerfile
    image: nyl2pronos-api
    restart: always
    ports:
      - 8001:80
    expose:
      - "80"
    depends_on:
      - db
      - memcached
    environment:
      - DJANGO_PRODUCTION=1
    volumes:
      - ./data/api/uploads:/code/uploads
      - ./data/api/static:/code/static
  nginx-api:
    image: nginx:latest
    container_name: nyl2pronos-nginx-api
    restart: always
    expose:
      - "80"
    volumes:
      - ./data/api/uploads:/uploads
      - ./data/api/static:/static
      - ./nyl2pronos_api/config:/etc/nginx/conf.d
      - ./logs/nginx-api:/var/log/nginx
    depends_on:
      - api
  nginx-proxy:
    image: nginx:latest
    container_name: nyl2pronos-proxy
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./proxy:/etc/nginx/conf.d
      - /etc/letsencrypt:/etc/letsencrypt
      - ./logs/nginx-proxy:/var/log/nginx
    deploy:
      placement:
        constraints: [node.role == manager]
    depends_on:
      - nginx-api
      - website
When I use docker-compose up, everything works fine.
But when I deploy with docker stack deploy --compose-file=docker-compose.yml prod, my nginx config files can't find the different upstreams.
This is the error provided by my service nginx-api:
2019/03/23 17:32:41 [emerg] 1#1: host not found in upstream "api" in /etc/nginx/conf.d/nginx.conf:2
My nginx.conf is below:
upstream docker-api {
    server api;
}

server {
    listen 80;
    server_name xxxxxxxxxxxxxx;

    location /static {
        autoindex on;
        alias /static/;
    }

    location /uploads {
        autoindex on;
        alias /uploads/;
    }

    location / {
        proxy_pass http://docker-api;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
If you see something wrong in my configuration or something I can do better, let me know!
This is happening because the nginx-api service comes up before the api service.
But I added the depends_on option?
You are right, and that option does work for the docker-compose up case, but unfortunately not for docker stack deploy, or, as the docs put it:
The depends_on option is ignored when deploying a stack in swarm mode with a version 3 Compose file.
OK, so what can I do now?
Nothing. It's actually not a bug: docker swarm tasks (your stack services) are supposed to recover automatically on error (that's why you define the restart: always option), so it should work for you anyway.
If you are using the compose file only for deploying the stack and not with docker-compose up, you may remove the depends_on option completely; it means nothing to docker stack.
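For completeness, swarm's retry behavior can be tuned per service with a restart_policy under deploy. A minimal sketch (the values here are illustrative, not from the original compose file):

services:
  nginx-api:
    image: nginx:latest
    deploy:
      restart_policy:
        condition: on-failure   # retry whenever the task exits with an error,
        delay: 5s               # e.g. while the "api" upstream is not yet resolvable
        max_attempts: 10

With that in place, the nginx-api task simply restarts until the api service is up and the upstream resolves.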
I have a web application running on Ruby on Rails with SOLR in docker-compose. It exposes port 3001, and I want to serve it under a subdomain URL that my university owns (I have access to the configuration panel, where I can only specify the "target", which is, I guess, the IP of my local server on which the web application is running).
I first tried to do this redirection without nginx, but the URL data.chembiosys.de was just redirected to http://static.ip:3001.
The app is running though, and is accessible.
So I wanted to try using nginx as a reverse proxy, but the effect is basically the same:
- I need to specify the port number and the IP of my server in the configuration panel of the domain name of interest
- when I type "data.chembiosys.de" in the browser, it shows the IP and the port number
What I do is first create an nginx-proxy network:
sudo docker network create nginx-proxy
Then I start nginx-proxy with this docker-compose.yml:
version: "3"
services:
nginx-proxy:
image: jwilder/nginx-proxy
container_name: nginx-proxy
restart: always
ports:
- "80:80"
- "443:443"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- /home/myhome/Projects/nginx-proxy/conf/my_conf.conf:/etc/nginx/conf.d/my_proxy.conf:ro
whoami:
image: jwilder/whoami
environment:
- VIRTUAL_HOST=whoami.local
networks:
default:
external:
name: nginx-proxy
With the second volume, I copy the following config file into the nginx-proxy container:
server {
    listen 80;
    server_name http://mystaticip:3001;
    client_max_body_size 2G;
    return 301 http://data.chembiosys.de$request_uri;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host data.chembiosys.de;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://mystaticip:3001;
    }
}
And finally, I run the Rails app's docker-compose.yml:
version: '3'

services:
  db:
    image: mysql:5.7
    container_name: seek-mysql_cbs
    restart: always
    env_file:
      - docker/db.env
    volumes:
      - seek-mysql-db_cbs:/var/lib/mysql
  seek: # The SEEK application
    #build: .
    image: fairdom/seek:1.7
    container_name: seek_cbs
    command: docker/entrypoint.sh
    restart: always
    environment:
      RAILS_ENV: production
      SOLR_PORT: 8983
      NO_ENTRYPOINT_WORKERS: 1
    env_file:
      - docker/db.env
    volumes:
      - seek-filestore_cbs:/seek/filestore
      - seek-cache_cbs:/seek/tmp/cache
    ports:
      - "3001:3000"
    depends_on:
      - db
      - solr
    links:
      - db
      - solr
  seek_workers: # The SEEK delayed job workers
    #build: .
    image: fairdom/seek:1.7
    container_name: seek-workers_cbs
    command: docker/start_workers.sh
    restart: always
    environment:
      RAILS_ENV: production
      SOLR_PORT: 8983
    env_file:
      - docker/db.env
    volumes:
      - seek-filestore_cbs:/seek/filestore
      - seek-cache_cbs:/seek/tmp/cache
    depends_on:
      - db
      - solr
    links:
      - db
      - solr
  solr:
    image: fairdom/seek-solr
    container_name: seek-solr_cbs
    volumes:
      - seek-solr-data_cbs:/opt/solr/server/solr/seek/data
    restart: always

volumes:
  seek-filestore_cbs:
    driver: local-persist
    driver_opts:
      mountpoint: /home/myhome/Projects/ChemBioSys/docker_volumes/seek-filestore_cbs
  seek-mysql-db_cbs:
    driver: local-persist
    driver_opts:
      mountpoint: /home/myhome/Projects/ChemBioSys/docker_volumes/seek-mysql-db_cbs
  seek-solr-data_cbs:
    driver: local-persist
    driver_opts:
      mountpoint: /home/myhome/Projects/ChemBioSys/docker_volumes/seek-solr-data_cbs
  seek-cache_cbs:
    driver: local-persist
    driver_opts:
      mountpoint: /home/myhome/Projects/ChemBioSys/docker_volumes/seek-cache_cbs

networks:
  default:
    external:
      name: nginx-proxy
I have the feeling that nginx-proxy is simply failing to connect the URL to the app. What am I doing wrong, and how do I connect the app to the URL with nginx? Also, how do I avoid the rewrite of the URL to IP:port?
P.S. The static IP I got from the SysAdmins is alphanumerical, and I see the following warning when the nginx-proxy docker-compose runs:
nginx-proxy | [warn] 30#30: server name "http://pc08.ian.uni-jena.de:3001" has suspicious symbols in /etc/nginx/conf.d/my_proxy.conf:3
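Incidentally, that warning points at a real problem in the proxy config above: nginx's server_name directive expects bare hostnames, with no scheme and no port, which is exactly what the "suspicious symbols" message is flagging. A minimal correction, using the hostnames that appear in this question:

# server_name takes hostnames only: no http:// prefix, no :3001 suffix
server_name data.chembiosys.de pc08.ian.uni-jena.de;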