I have a project running on Docker, with an Nginx reverse proxy in front of my app.
Everything works fine, but I'm trying to personalize the server_name in Nginx and couldn't figure out how.
Docker Compose file (I've added the server name to the containers' /etc/hosts via Docker's extra_hosts):
version: "3"
services:
nginx:
container_name: nginx
volumes:
- ./nginx/logs/nginx:/var/log/nginx
build:
context: ./nginx
dockerfile: ./Dockerfile
depends_on:
- menu-app
ports:
- "80:80"
- "433:433"
extra_hosts:
- "www.qr-menu.loc:172.18.0.100"
- "www.qr-menu.loc:127.0.0.1"
networks:
default:
ipv4_address: 172.18.0.100
menu-app:
image: menu-app
container_name: menu-app
volumes:
- './menu-app/config:/var/www/config'
- './menu-app/core:/var/www/core'
- './menu-app/ecosystem.json:/var/www/ecosystem.json'
- './menu-app/tsconfig.json:/var/www/tsconfig.json'
- './menu-app/tsconfig-build.json:/var/www/tsconfig-build.json'
- "./menu-app/src:/var/www/src"
- "./menu-app/package.json:/var/www/package.json"
build:
context: .
dockerfile: menu-app/.docker/Dockerfile
tmpfs:
- /var/www/dist
ports:
- "3000:3000"
extra_hosts:
- "www.qr-menu.loc:127.0.0.1"
- "www.qr-menu.loc:172.18.0.100"
networks:
default:
ipam:
driver: default
config:
- subnet: 172.18.0.0/24
And I have this Nginx conf:
server_names_hash_bucket_size 1024;

upstream local_pwa {
    server menu-app:3000;
    keepalive 8;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name www.qr-menu.loc 172.18.0.100;

    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://local_pwa/;
    }
}
But unfortunately the app runs on localhost instead of www.qr-menu.loc.
I couldn't figure out how to change the server_name in Nginx.
This is a really, really late answer. The server_name directive tells nginx which configuration block to use on receipt of a request. Also see: http://nginx.org/en/docs/http/server_names.html
I think the docker-compose extra_hosts directive might only work for domain-name resolution within the docker network. In other words, on your computer that's running docker the name "www.qr-menu.loc" is not available, but in a running docker container that name should be available.
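For example (a minimal sketch, assuming the site is meant to be reached through the port published on the host), you could add the name to the hosts file of the machine running Docker, so the browser resolves it to localhost where Nginx's port 80 is published:

# /etc/hosts on the host machine (not inside a container)
127.0.0.1    www.qr-menu.loc

With that entry in place, a request to http://www.qr-menu.loc reaches the published port 80 and should match the server_name www.qr-menu.loc block in the Nginx config above.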
Related
I have an existing NGINX server hosting 2 websites, one as standard and one on a Node server, and I want to run 3 Docker containers on it as well.
All of the tutorials suggest running NGINX in a container; however, this would conflict with my existing setup.
nodejs server, ports 3030:3030
mysql, ports 3360:3360
phpmyadmin, ports 8080:80
They run fine on localhost on my local machine, but I can't get NGINX on the remote server to host them.
I want to be able to access the node server at http://publicIP:3030
I have tried to follow this answer, but NGINX gives me a 404 error when I try to access them.
my nginx config is:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html index.php;

    server_name _;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    location /paragon/ {
        proxy_pass http://localhost:3030/;
        # proxy_set_header X-SRV paragon;
    }

    location /phpmyadmin {
        proxy_pass http://localhost:8080/;
        # proxy_set_header X-SRV phpmyadmin;
    }

    location /mysql {
        proxy_pass http://localhost:3360/;
        # proxy_set_header X-SRV mysql;
    }
}
I have tried it with the X-SRV headers uncommented as well.
My docker-compose.yml config is:
services:
  web:
    container_name: paragon_web
    build: .
    command: npm run
    depends_on:
      - db
    volumes:
      - ./:/app
      - /node_modules
    networks:
      - paragon_net
    ports:
      - "3030:3030"

  db:
    container_name: paragon_db
    image: mysql:8.0
    command:
      --default-authentication-plugin=mysql_native_password
      --init-file ./src/data/db_init.sql
    restart: unless-stopped
    volumes:
      - ./src/data/db_init.sql:/docker-entrypoint-initdb.d/
      - mysql-data:/var/lib/mysql
    ports:
      - "3360:3306"
    expose:
      - "3306"
    environment:
      MYSQL_DATABASE: paragon
      MYSQL_ROOT_PASSWORD: root
      MYSQL_USER: admin
      MYSQL_PASSWORD: paragon99
      SERVICE_TAG: dev
      SERVICE_NAME: paragon_db
    networks:
      - paragon_net
  # volumes:

  phpmyadmin:
    container_name: sql_admin
    image: phpmyadmin:5.2.0-apache
    restart: always
    depends_on:
      - db
    ports:
      - "8090:80"
    networks:
      - paragon_net

networks:
  paragon_net:
    driver: bridge
The new site is located on the server at /var/www/newsite.
I am trying to set up pgAdmin with Docker Compose and Nginx, but something weird is happening.
Every time I enter the site, pgAdmin redirects to /browser and also replaces the host with the container name, so I end up browsing https://pgadmin_container/browser. But sometimes when I go directly to https://my_url.com/browser it works. Is this a bug, or am I missing something?
Here is the nginx config:
server {
    listen 80;
    server_name some_name;

    limit_conn conn_limit_per_ip 10;
    limit_req zone=req_limit_per_ip burst=10 nodelay;

    location / {
        resolver 127.0.0.11 valid=30s;
        set $upstream_pgadmin pgadmin_container;
        proxy_pass http://$upstream_pgadmin:80;
        proxy_redirect off;
        proxy_buffering off;
    }
}
And here is the docker-compose content:
pgadmin:
  container_name: pgadmin_container
  image: dpage/pgadmin4
  environment:
    PGADMIN_DEFAULT_EMAIL: someEmail
    PGADMIN_DEFAULT_PASSWORD: somePassword
    PGADMIN_CONFIG_SERVER_MODE: 'False'
  volumes:
    - ./pgadmin:/root/.pgadmin2
  ports:
    - "5050:80"
  networks:
    - shared
  restart: unless-stopped
Sorry for my bad English.
I have the following docker-compose.yml set up at my root. For context, I have a Ghost CMS blog hosted on a Digital Ocean droplet. I want to install Commento (an open-source commenting solution) using Docker, but as I'm routing my traffic through Cloudflare DNS, I require SSL on both the server side and the frontend side.
However, I installed Ghost through Digital Ocean's one-click Ghost setup, which configured nginx as the reverse proxy for my site. Nginx is NOT in a container (it is installed on the server) and listens on ports 80 and 443. When I try docker-compose up, I get the following error:
Error starting userland proxy: listen tcp 0.0.0.0:443: bind: address already in use
Traefik cannot listen on the same ports as nginx (which is not within a container, but installed on the server itself). How can I fix this problem and have my Commento server reverse proxied through SSL as well? My docker-compose is below:
version: '3.7'

services:
  proxy:
    restart: always
    image: traefik
    command:
      - "--api"
      - "--entrypoints=Name:http Address::80 Redirect.EntryPoint:https"
      - "--entrypoints=Name:https Address::443 TLS"
      - "--defaultentrypoints=http,https"
      - "--acme"
      - "--acme.storage=/etc/traefik/acme/acme.json"
      - "--acme.entryPoint=https"
      - "--acme.httpChallenge.entryPoint=http"
      - "--acme.onHostRule=true"
      - "--acme.onDemand=false"
      - "--acme.email=changeme@example.com" # TODO: Replace with your email address
      - "--docker"
      - "--docker.watch"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik/acme:/etc/traefik/acme
    networks:
      - web
    ports:
      - "80:80"
      - "443:443"
    labels:
      - "traefik.enable=false"

  server:
    image: registry.gitlab.com/commento/commento:latest
    ports:
      - 8080:8080
    environment:
      COMMENTO_ORIGIN: https://commento.example.com # TODO: Replace commento.example.com with your domain
      COMMENTO_PORT: 8080
      COMMENTO_POSTGRES: postgres://postgres:passwordexample@db:5432/commento?s$
    depends_on:
      - db
    networks:
      - db_network
      - web

  db:
    image: postgres
    environment:
      POSTGRES_DB: commento
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: examplepassword #TODO: Replace STRONG_PASSWORD with th$
    networks:
      - db_network
    volumes:
      - postgres_data_volume:/var/lib/postgresql/data

volumes:
  postgres_data_volume:

networks:
  web:
    external: true
  db_network:
Here is my nginx server config under sites-available:
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name example.com;
    root /var/www/ghost/system/nginx-root; # Used for acme.sh SSL verification (https://acme.sh)

    ssl_certificate /etc/letsencrypt/example.com/fullchain.cer;
    ssl_certificate_key /etc/letsencrypt/example.com/example.com.key;
    include /etc/nginx/snippets/ssl-params.conf;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:2368;
    }

    location ~ /.well-known {
        allow all;
    }

    client_max_body_size 50m;
}
Sorry, kind of new to this. Thank you!
docker-compose.yml
...
ports:
  - "80:80"
  - "443:443"
...
nginx/conf
...
listen 443 ssl http2;
listen [::]:443 ssl http2;
...
Nginx is already using HOST port 443, so you cannot reuse it in your docker-compose; you must use another port that is free.
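As a sketch of one workaround (the host-side port numbers here are just examples), you could publish Traefik on ports that are still free on the host and leave 80/443 to the host Nginx:

ports:
  - "8081:80"
  - "8443:443"

Alternatively, drop Traefik's 80/443 bindings entirely and let the existing host Nginx terminate SSL and proxy to Commento's published port 8080, the same way the Ghost server block above proxies to http://127.0.0.1:2368.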
Trying to add Varnish to Nginx using a Docker container name rather than an IP address.
I've tried adding it directly with set_real_ip_from site-varnish, but that doesn't work.
I also tried adding an upstream (below) and using set_real_ip_from varnish_backend, with no luck:
upstream varnish_backend {
    server site-varnish;
}
Any help would be appreciated. I've added the current working conf below for reference.
upstream fastcgi_backend {
    server site-fpm;
}

server {
    listen 80;
    listen 443 ssl;
    server_name localhost;

    location = /ping {
        set_real_ip_from 192.168.176.2;
        real_ip_header X-Forwarded-For;
        access_log off;
        allow 127.0.0.1;
        deny all;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
        fastcgi_pass fastcgi_backend;
    }
}
docker-compose.yml
version: "2"
services:
site-varnish:
build:
context: ./etc/varnish/
ports:
- 80
networks:
- frontend
site-web:
build:
context: ./etc/nginx/
volumes_from:
- site-appdata
env_file:
- ./global.env
restart: always
networks:
- backend
- frontend
site-fpm:
build:
context: ./etc/7.2-fpm/
ports:
- 9000
volumes_from:
- site-appdata
env_file:
- ./global.env
networks:
- backend
site-appdata:
image: tianon/true
volumes:
- ./html:/var/www/html
networks:
frontend:
external:
name: webproxy
backend:
external:
name: backbone
I've updated the nginx version to > 1.13.1, based upon @LinPy's suggestion, and am now able to use set_real_ip_from site-varnish directly inside my conf.
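For reference, a minimal sketch of the resulting conf (since nginx 1.13.1 set_real_ip_from accepts a hostname; as far as I know it is resolved when the configuration is loaded, so the site-varnish container needs to be on a shared network and resolvable when Nginx starts):

location = /ping {
    set_real_ip_from site-varnish;
    real_ip_header X-Forwarded-For;
    access_log off;
    allow 127.0.0.1;
    deny all;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
    fastcgi_pass fastcgi_backend;
}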
I'm having trouble creating a reverse proxy and having it point at apps that are in other containers.
What I have now is a docker-compose for Nginx, and then I want to have separate docker-containers for several different apps and have Nginx direct traffic to those apps.
My Nginx docker-compose is:
version: "3"
services:
nginx:
image: nginx:alpine
volumes:
- ./default.conf:/etc/nginx/conf.d/default.conf
My default.conf is:
server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

server {
    listen 80;
    server_name www.mydomain.com;

    location /confluence {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://192.168.1.50:8090/confluence;
    }
}
I can access confluence directly at: http://192.168.1.50:8090/confluence
My compose for confluence is:
version: "3"
services:
db:
image: postgres:9.6
container_name: pg_confluence
env_file:
- env.list
ports:
- "5434:5432"
volumes:
- ./pg_conf.sql:/docker-entrypoint-initdb.d/pg_conf.sql
- dbdata:/var/lib/postgresql/data
confluence:
image: my_custom_image/confluence:6.11.0
container_name: confluence
volumes:
- confluencedata:/var/atlassian/application-data/confluence
- ./server.xml:/opt/atlassian/confluence/conf/server.xml
environment:
- JVM_MAXIMUM_MEMORY=2g
ports:
- "8090:8090"
depends_on:
- db
volumes:
confluencedata:
dbdata:
I am able to see the Nginx "Welcome" screen when I hit mydomain.com, but if I hit mydomain.com/confluence it returns a "not found".
So it looks like Nginx is running, it just isn't sending the traffic to the other container properly.
========================
=== Update With Solution ===
========================
I ended up switching to Traefik instead of Nginx. When I take the next step and start learning k8s this will help as well.
These network settings are what you need even if you stick with Nginx; I just didn't test them against Nginx, so hopefully they are helpful no matter which proxy you end up using.
For the confluence docker-compose.yml I added:
networks:
  proxy:
    external: true
  internal:
    external: false

services:
  confluence:
    ...
    networks:
      - internal
      - proxy
  db:
    ...
    networks:
      - internal
And for the traefik docker-compose.yml I added:
networks:
  proxy:
    external: true

services:
  reverse-proxy:
    networks:
      - proxy
I had to create the network manually with:
docker network create proxy
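If you do stay with Nginx instead of Traefik, the same network layout lets the proxy reach the other container by its Compose service name rather than the host IP. A sketch, assuming the Nginx service is also attached to the external proxy network:

location /confluence {
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://confluence:8090/confluence;
}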
This is not really the correct way to use Docker.
If you are in a production environment, use a real orchestration tool (nowadays Kubernetes is the way to go).
If you are on your computer, you can reference the name of a container (or an alias) only if you use the same network AND that network is not the default one.
One way is to have only one docker-compose file.
Another way is to use the same network across your docker-compose files:
Create a network: docker network create --driver bridge my_network
Use it in each docker-compose file you have:
networks:
  default:
    external:
      name: my_network
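As a usage sketch (the service name here is just an illustration), any other docker-compose file that declares the same external network can then be reached by its service name from the proxy's configuration:

# second project's docker-compose.yml
services:
  menu-app:
    image: menu-app

networks:
  default:
    external:
      name: my_network

# in the proxy's nginx conf, that app is then reachable as:
# proxy_pass http://menu-app:3000/;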