I’m having trouble with docker-compose and nginx. First, I have this docker-compose.yml:
services:
  nginx:
    build: ./nginx
    ports:
      - '8080:80'
    depends_on:
      - web
      - api
  web:
    build: ./web
    depends_on:
      - api
  api:
    build: ./api
Both web (port 3000) and api (port 8000) are express servers that return WEB and API respectively. Now, inside ./nginx:
# Dockerfile
FROM nginx:alpine
COPY ["default.conf", "/etc/nginx/conf.d/"]
EXPOSE 80
# default.conf
server {
    location / {
        proxy_pass http://web:3000;
    }
    location /api {
        proxy_pass http://api:8000;
    }
}
Now, when I go to http://localhost:8080 I get WEB, but when I go to http://localhost:8080/api it redirects to http://localhost:1337/api/ and I get nothing (by the way, the response is a 304).
However, when I write this default.conf (putting api at /):
# default.conf
server {
    location / {
        proxy_pass http://api:8000;
    }
    location /api {
        proxy_pass http://web:3000;
    }
}
I get the same result, except that / now returns API, so both servers are reachable.
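One thing worth checking here: with proxy_pass http://api:8000; (no URI part), nginx forwards the original /api... path to the upstream unchanged, so the Express api would need to define its routes under /api. Below is a minimal sketch of a prefix-stripping variant, assuming the api actually serves its routes at /; the trailing slash on proxy_pass is what makes nginx replace /api/ with / before forwarding:

# default.conf (hedged sketch, not tested against your exact setup)
server {
    listen 80;

    location / {
        proxy_pass http://web:3000;
    }

    location /api/ {
        proxy_pass http://api:8000/;
    }
}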
I don't know whether this helps or not, but I use the nginx Docker image directly in my docker-compose. For example:
docker-compose.yml
version: '3'
services:
  jobsaf-server:
    build:
      context: .
      dockerfile: Dockerfile.production
    container_name: jobsaf-server
    ports:
      - "3000:3000"
      - "5858:5858"
      - "35729:35729"
      - "6379:6379"
    environment:
      - NODE_ENV=production
    networks:
      - front-tier
      - back-tier
    depends_on:
      - "redis"
      - "mongo"
    links:
      - mongo
      - redis
    volumes:
      - ./server:/var/www/app/jobsaf-website/server
  nginx:
    image: nginx:stable
    depends_on:
      - jobsaf-server
    links:
      - jobsaf-server
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "0.0.0.0:80:80"
  mongo:
    image: mongo:latest
    container_name: mongo
    volumes:
      - "db-data:/data/db"
    environment:
      - MONGO_INITDB_ROOT_USERNAME=${DB_USER}
      - MONGO_INITDB_ROOT_PASSWORD=${DB_PASS}
      - MONGO_INITDB_DATABASE=admin
    ports:
      - "0.0.0.0:27017:27017"
    networks:
      - back-tier
  redis:
    image: redis
    container_name: redis
    networks:
      - back-tier
volumes:
  db-data:
  # - /data/db
networks:
  front-tier:
  back-tier:
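The compose file only mounts ./nginx/default.conf without showing it, so here is a hedged sketch of what that file could look like, assuming the Node app inside jobsaf-server listens on port 3000:

# ./nginx/default.conf (sketch under that assumption)
server {
    listen 80;

    location / {
        proxy_pass http://jobsaf-server:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}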
I use a docker-compose file with 3 services (Node-RED, Mosquitto and Mongo) and I want to use nginx as a load balancer for the Node-RED service:
version: '2.2'
networks:
  Platform-Network:
    name: IoT-Network
volumes:
  Platform:
  MQTT-broker:
  DataBase:
  Nginx:
services:
  Platform:
    image: custom-node-red:latest
    networks:
      - Platform-Network
    restart: always
    volumes:
      - ./node-red:/data
    # depends_on:
    #   - Nginx
  MQTT-broker:
    container_name: mosquitto
    image: eclipse-mosquitto
    ports:
      - "192.168.100.101:1883:1883"
    networks:
      - Platform-Network
    restart: always
    depends_on:
      - Platform
    volumes:
      - ./mosquitto/config:/mosquitto/config
      - ./mosquitto/data:/mosquitto/data
      - ./mosquitto/log:/mosquitto/log
  DataBase:
    container_name: mongodb
    image: mongo
    ports:
      - "8083:27017"
    networks:
      - Platform-Network
    restart: always
    depends_on:
      - Platform
    volumes:
      - ./mongodb/data:/data/db
      - ./mongodb/backup:/data/backup
      - ./mongodb/mongod.conf:/etc/mongod.conf
      - ./mongodb/log:/var/log/mongodb/
  Nginx:
    container_name: nginx
    image: nginx
    ports:
      - "4000:4000"
    depends_on:
      - Platform
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
    restart: always
Nginx config file is:
user nginx;
events {
    worker_connections 1000;
}
http {
    upstream platform {
        server platform_1:1880;
        server platform_2:1880;
        server platform_3:1880;
    }
    server {
        listen [::]:4000;
        listen 4000;
        location / {
            proxy_pass http://platform;
        }
    }
}
I run the stack with docker-compose -f platform.yaml up -d --scale Platform=3 and the Platform containers come up, but nginx does not start. The nginx container log shows an error that I can't resolve.
Your nginx service also needs to join the same network as Platform:
  Nginx:
    container_name: nginx
    image: nginx
    ports:
      - "4000:4000"
    depends_on:
      - Platform
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
    restart: always
    networks: # add this
      - Platform-Network
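The upstream names are a second common stumbling block with --scale: platform_1, platform_2 and platform_3 only resolve if they exactly match the generated container names, which include the compose project name. A hedged alternative, assuming nginx and Platform share the network as above, is to point the upstream at the service name and let Docker's DNS hand back one address per scaled replica (nginx resolves them all when it starts and round-robins across them):

upstream platform {
    # the service name resolves to one address per replica on the shared network
    server Platform:1880;
}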
Context
Our solution sends email campaigns, and some email providers temporarily blacklist us because we used nonexistent email addresses (they may be created/imported by any user).
Solution
I am trying to implement an email checker based on https://www.codexworld.com/verify-email-address-check-if-real-exists-domain-php/
Problem
From my local computer this script works (when my IP is not temporarily blacklisted), but from my Docker container stream_socket_client or even telnet always times out.
As far as I can tell, MX servers time out unverified requesters, but how can I make this work from my Docker container?
I have no specific Docker configuration for port 25.
Thank you
docker-compose.yml
# Base docker-compose file to run required services: Adminer, MySQL, and Redis
version: '3.7'
services:
  adminer:
    image: adminer:latest
    ports:
      - "8080:8080"
  database:
    image: mysql:8.0
    volumes:
      - db_data:/var/lib/mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: root
    cap_add: [ SYS_NICE ] # https://github.com/docker-library/mysql/issues/303
    command: ['mysqld', '--character-set-server=utf8mb4', '--collation-server=utf8mb4_general_ci', '--lower_case_table_names=1']
  redis:
    image: redis:alpine
    expose:
      - "6379"
  redisinsight:
    image: redislabs/redisinsight
    ports:
      - "8081:8001"
  reverse-proxy:
    image: nginx:alpine
    depends_on:
      - backend
      - frontend
    volumes:
      - ./../../../backend/infra/etc/nginx/dev.conf:/etc/nginx/nginx.conf
    ports:
      - "80:80"
  backend:
    build:
      context: ./../..
      dockerfile: container/dev/backend.dockerfile
    expose:
      - "80"
    volumes:
      - {...}
    depends_on:
      - database
  frontend:
    build:
      context: ./../..
      dockerfile: container/dev/frontend.dockerfile
    expose:
      - "3000"
    volumes:
      - {...}
    tty: true # required to keep yarn running (https://stackoverflow.com/a/61050994)
volumes:
  # Contains the database's data
  db_data: {}
backend.dockerfile is an Ubuntu image with git, mysql-client, and PHP.
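Many hosting and cloud providers block outbound port 25 by default, so before debugging the PHP side it is worth testing raw TCP connectivity from inside the container. A hedged one-liner, assuming the service is called backend, bash is present in the Ubuntu-based image, and using Google's public MX host purely as a test target:

docker-compose exec backend bash -c 'timeout 5 bash -c "echo > /dev/tcp/gmail-smtp-in.l.google.com/25" && echo "port 25 reachable" || echo "port 25 blocked"'

If the same test also times out on the Docker host itself, the block is happening at the provider or firewall level rather than in the compose configuration.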
I followed this tutorial: https://rafrasenberg.com/posts/docker-container-management-with-traefik-v2-and-portainer/ to build a Traefik reverse proxy on my server.
And I tested it with a very simple application that renders a "hello world" in an index.html:
version: "3"
services:
app:
image: nginx
environment:
PORT: ${PORT}
volumes:
- .:/usr/share/nginx/html/
networks:
- proxy
- default
labels:
- "traefik.enable=true"
- "traefik.docker.network=proxy"
- "traefik.http.routers.app-secure.entrypoints=websecure"
- "traefik.http.routers.app-secure.rule=Host(`my-test.localhost`)"
networks:
proxy:
external: true
It works!
Now I want to take the next step and use it for a MERN stack project, and I'm a bit lost.
Usually I dockerize a MERN stack by:
creating a Dockerfile in /server
creating a Dockerfile in /client
creating a docker-compose.yml in the root directory
version: "3.7"
services:
server:
build:
context: ./server
dockerfile: Dockerfile
image: myapp-server
container_name: myapp-node-server
command: /usr/src/app/node_modules/.bin/nodemon server.js
volumes:
- ./server/:/usr/src/app
- /usr/src/app/node_modules
ports:
- 5000
depends_on:
- mongo
env_file: ./server/.env
environment:
- NODE_ENV=development
networks:
- app-network
- proxy
mongo:
image: mongo
volumes:
- data-volume:/data/db
ports:
- 27017
networks:
- app-network
- proxy
client:
build:
context: ./client
dockerfile: Dockerfile
image: myapp-client
container_name: myapp-react-client
command: npm start
volumes:
- ./client/:/usr/app
- /usr/app/node_modules
depends_on:
- server
ports:
- 3001
networks:
- app-network
labels:
- "traefik.enable=true"
- "traefik.docker.network=proxy"
- "traefik.http.routers.app2-secure.entrypoints=websecure"
- "traefik.http.routers.app2-secure.rule=Host(`test.localhost`)"
networks:
app-network:
driver: bridge
proxy:
external: true
volumes:
data-volume:
node_modules:
web-root:
driver: local
But it seems my proxy is not working: the console doesn't show any error, and the app works on localhost:3000, but on test.localhost I get a "Gateway Timeout" error.
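For what it's worth, two things commonly produce exactly this Gateway Timeout with Traefik v2: the routed container is not attached to the proxy network Traefik uses, and Traefik picks the wrong container port. A hedged sketch of the client service with both addressed, assuming the React dev server listens on port 3000 inside the container:

  client:
    # build, image, command, volumes as above
    networks:
      - app-network
      - proxy  # Traefik must be able to reach the container over this network
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=proxy"
      - "traefik.http.routers.app2-secure.entrypoints=websecure"
      - "traefik.http.routers.app2-secure.rule=Host(`test.localhost`)"
      - "traefik.http.services.app2.loadbalancer.server.port=3000"  # explicit container port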
I am trying to set up nginx inside Docker.
However, I ran into a problem: when going to example.com, the browser does not display the site. But if I go to the IP address (for example 111.111.111.111) or access the site through the port, example.com:8000, everything works. How do I solve this?
docker-compose.yml
version: "3"
services:
postgresql:
build:
context: ./docker/postgres
dockerfile: Dockerfile
environment:
- POSTGRES_PASSWORD=password
volumes:
- ./docker/postgres/init.sql:/docker-entrypoint-initdb.d/init.sql
ports:
- "5432:5432"
restart: unless-stopped
django:
build:
context: ./
dockerfile: Dockerfile
env_file:
- ./.env
volumes:
- ./:/usr/src/app
ports:
- "8000:8000"
depends_on:
- postgresql
restart: unless-stopped
nginx:
build:
context: ./docker/nginx
dockerfile: Dockerfile
ports:
- "80:80"
depends_on:
- postgresql
- django
restart: unless-stopped
docker/nginx/nginx.conf
server {
    listen 80;
    server_name example.com;
    location / {
        proxy_pass http://django:8000;
    }
}
docker/nginx/Dockerfile
FROM nginx:latest
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
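Since example.com:8000 works, DNS for the domain is clearly resolving, and because this is the only server block it answers any Host header, so the container side looks sound. A hedged way to narrow things down is to test plain HTTP for the domain from the command line, bypassing anything the browser may have cached:

curl -v http://example.com/
curl -v -H "Host: example.com" http://111.111.111.111/

If both of these return the proxied Django response, the browser may be upgrading example.com to HTTPS (a cached redirect or HSTS entry), and nothing in this stack listens on port 443.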
What I am trying to achieve
I am trying to integrate an SSL certificate for my production site, and as a bonus I'd like a self-signed certificate for local development.
The issue I am having
When trying to integrate nginx-proxy and the letsencrypt companion, it always results in a redirect loop or a 502 Bad Gateway error.
I've looked at a variety of ways to integrate these two containers, but I'm still stuck and keep coming back to the same questions when trying to fit them into my environment.
More detail about my environment
I am running a multi-container Docker Compose web app which uses PHP/PHP-FPM 7.2, MySQL & Nginx. The config looks like:
version: '3.1'
networks:
  mywebapp:
services:
  nr_nginx:
    build: ./env/nginx
    networks:
      - mywebapp
    ports:
      - 80:80
      - 443:443
    env_file:
      - ./env/nginx/.env
    depends_on:
      - nr_php72
    tty: true
    volumes:
      - ./src:/home/www/mywebapp/src
      - ./storage:/home/www/storage/mywebapp
      - ./data/nginx/logs:/var/log/nginx
      - ./env/nginx/webserver/nginx.conf:/etc/nginx/nginx.conf
      - ./env/nginx/webserver/conf.d:/etc/nginx/conf.d
      - ./env/nginx/webserver/defaults:/etc/nginx/defaults
      - ./env/nginx/webserver/global:/etc/nginx/global
      - ./env/nginx/ssl/:/etc/letsencrypt/
      - ./env/nginx/share:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
  nr_mysql:
    build: ./env/mysql
    networks:
      - mywebapp
    ports:
      - 3306:3306
    env_file:
      - ./env/mysql/.env
    volumes:
      - ./data/mysql:/var/lib/mysql
      - ./env/mysql/conf.d:/etc/mysql/conf.d
      - ./data/dbimport/:/docker-entrypoint-initdb.d
  nr_php72:
    build: ./env/php72
    hostname: php72
    networks:
      - mywebapp
    depends_on:
      - nr_mysql
    ports:
      - 9000:9000
      - 8080:8080
    volumes:
      - ./env/composer:/home/www/.composer
      - ./env/global/bashrc:/home/www/.bashrc
      - ./data/bash/.bash_history:/home/www/.bash_history
      - ~/.ssh:/home/www/.ssh:ro
      - ~/.gitconfig:/home/www/.gitconfig:ro
      - ./storage:/home/www/storage/mywebapp
      - ./src:/home/www/mywebapp/src
Questions
Should the nginx-proxy replace my existing "nr_nginx" container?
Do I have to remove the 80:80 and 443:443 port mappings for "nr_nginx" and instead assign a random unique port of my choice, e.g. 5000?
If yes to question 2, how do I tell the nginx-proxy to proxy-pass to my container on port 5000?
Okay, so I think I have solved it:
No, it shouldn't replace your own nginx configuration.
Yes, remove ports 80 and 443, as these will be handled by the nginx-proxy; instead, expose the ports in your container.
You do not need to configure proxy_pass manually; nginx-proxy does this for you, as long as you specify a VIRTUAL_PORT environment variable.
Here is the boilerplate code that worked for me:
Boilerplate nginx-proxy / letsencrypt-companion
docker-compose.yml:
version: "3"
services:
nginx-proxy:
image: jwilder/nginx-proxy
container_name: nginx-proxy
ports:
- "80:80"
- "443:443"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ./env/nginx/certs:/etc/nginx/certs
- ./env/nginx/vhost.d:/etc/nginx/vhost.d
- ./env/nginx/share:/usr/share/nginx/html
letsencrypt:
image: jrcs/letsencrypt-nginx-proxy-companion
container_name: letsencrypt
volumes:
- ./env/nginx/certs:/etc/nginx/certs
- ./env/nginx/vhost.d:/etc/nginx/vhost.d
- ./env/nginx/share:/usr/share/nginx/html
- /var/run/docker.sock:/var/run/docker.sock:ro
environment:
- NGINX_PROXY_CONTAINER=nginx-proxy
networks:
default:
external:
name: nginx-proxy
Boilerplate Nginx PHP MySQL Environment
docker-compose.yml
version: '3.1'
services:
  nginx:
    container_name: nginx
    build: ./env/nginx
    ports:
      - 5000:5000
    expose:
      - 80
      - 443
    environment:
      - VIRTUAL_HOST=your.domain.com,www.your.domain.com
      - VIRTUAL_PORT=5000
      - LETSENCRYPT_EMAIL=your@email.com
      - LETSENCRYPT_HOST=your.domain.com
    depends_on:
      - php72
    tty: true
    volumes:
      - ./src:/home/www/webapp/src
      - ./storage:/home/www/storage/webapp
      - ./data/nginx/logs:/var/log/nginx
      - ./env/nginx/webserver/nginx.conf:/etc/nginx/nginx.conf
      - ./env/nginx/webserver/conf.d:/etc/nginx/conf.d
      - ./env/nginx/webserver/defaults:/etc/nginx/defaults
      - ./env/nginx/webserver/global:/etc/nginx/global
      - /var/run/docker.sock:/tmp/docker.sock:ro
  mysql:
    container_name: mysql
    build: ./env/mysql
    ports:
      - 3306:3306
    env_file:
      - ./env/mysql/.env
    volumes:
      - ./data/mysql:/var/lib/mysql
      - ./env/mysql/conf.d:/etc/mysql/conf.d
      - ./data/dbimport/:/docker-entrypoint-initdb.d
  php72:
    container_name: php72
    build: ./env/php72
    hostname: php72
    depends_on:
      - mysql
    ports:
      - 9000:9000
    volumes:
      - ./env/composer:/home/www/.composer
      - ./env/global/bashrc:/home/www/.bashrc
      - ./data/bash/.bash_history:/home/www/.bash_history
      - ~/.ssh:/home/www/.ssh:ro
      - ~/.gitconfig:/home/www/.gitconfig:ro
      - ./storage:/home/www/storage/webapp
      - ./src:/home/www/webapp/src
networks:
  default:
    external:
      name: nginx-proxy
/etc/nginx/conf.d/default.conf - inside "nginx" container:
server {
    listen 5000;
    listen [::]:5000;
    server_name www.your.domain.com;
    root /my/web/root/src;
    index index.php;
    include /any/conf/includes/here.conf;
    location / {
        fastcgi_param HTTPS 'on';
        try_files $uri $uri/ /index.php$is_args$args;
    }
}
The fastcgi_param HTTPS 'on'; directive prevents a redirect loop; alternatively, you can add $_SERVER['HTTPS'] = 'on'; to your index.php.
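If hard-coding HTTPS 'on' feels too blunt, a hedged alternative in index.php is to trust the forwarded protocol header instead; this assumes the proxy sets X-Forwarded-Proto, which jwilder/nginx-proxy normally does:

<?php
// Only report HTTPS to the app when the request really arrived over TLS at the proxy.
if (($_SERVER['HTTP_X_FORWARDED_PROTO'] ?? '') === 'https') {
    $_SERVER['HTTPS'] = 'on';
}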