I have a VM with Ubuntu and Docker.
I have two containers (an ASP.NET 6 app and an nginx server).
My docker-compose file looks like this:
version: '3.9'
services:
  nginx:
    container_name: nginx
    image: nginx
    ports:
      - "80:80"
    depends_on:
      - myapp
    volumes:
      - ./nginx/conf.d/:/etc/nginx/conf.d/
    restart: always
  myapp:
    container_name: myapp
    image: <...>
    restart: always
    volumes:
      - logs:/mnt/logs
      - data:/mnt/data
volumes:
  logs:
  data:
When I try to connect on port 80, I get "connection refused".
With the default nginx config, the default nginx page opens.
If I publish port 5000 for myapp, I can reach the app directly on that port.
nginx config:
server {
    listen 80;
    location / {
        proxy_pass http://myapp:5000;
    }
}
Problem solved: my config file mount was incorrect. The correct version:
<...>
    volumes:
      - nginx-config:/etc/nginx/conf.d
<...>
volumes:
  logs:
  data:
  nginx-config:
Now I can edit the file /var/lib/docker/volumes/coi_nginx-config/_data/default.conf directly.
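After editing the config in the named volume, it helps to confirm that nginx actually sees the file and then reload it. A minimal sketch, assuming the container name `nginx` from the compose file above:

```shell
# Validate the config as nginx sees it inside the container
docker exec nginx nginx -t

# Apply changes without restarting the container
docker exec nginx nginx -s reload

# Check the proxy from the host
curl -I http://localhost/
```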
I have one Docker Compose project with nginx (path docker/nginx):
services:
  web:
    image: nginx
    volumes:
      - ./templates:/etc/nginx/templates
    ports:
      - "80:80"
    networks:
      - nginx-net
networks:
  nginx-net:
    name: proxynet
    external: true
And another project with the app (docker/app):
services:
  php:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: app_php
    networks:
      - nginx-net
networks:
  nginx-net:
    external: true
    name: proxynet
The nginx conf.template file:
server {
    listen 80 default_server;
    server_name subdomain.domain.com;
    location / {
        proxy_pass http://app_php:80/;
    }
}
I want nginx to route to the app container, and to other containers in the future.
But in the browser, subdomain.domain.com shows a 504 error.
ping -c 4 app_php from the nginx container works fine.
I've spent a few hours on this and don't know what the problem is; I suspect the nginx conf.
You need to expose app_php on another port, e.g. 8000, and have nginx act as a proxy for that. Something like:
services:
  php:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: app_php
    networks:
      - nginx-net
    ports:
      - 8000:80
nginx.conf:
server {
    listen 80 default_server;
    server_name subdomain.domain.com;
    location / {
        proxy_pass http://app_php:8000;
    }
}
It's probably best to have all containers in a single docker-compose.yml file as well.
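Whatever port you settle on, it is worth testing the hop before touching the config: a successful ping only proves DNS resolution, not that anything answers on the port. A sketch using the shared external network `proxynet` from the compose files above:

```shell
# Run a throwaway curl container on the same network and probe the app.
# A 502/504 from nginx but a working response here would point at the config;
# a refused connection here means the app is not listening on that port.
docker run --rm --network proxynet curlimages/curl -sv http://app_php:80/ -o /dev/null
```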
I'm trying to run an nginx service with docker-compose and I get an error.
My docker-compose file:
version: '3.7'
services:
  nginx:
    image: nginx:alpine
    container_name: nginx
    ports:
      - 80:80
    volumes:
      - ./apps/:/apps
      - ./services/nginx/conf.d/:/etc/nginx/conf.d/
    depends_on:
      - php73
      - php80
    restart: always
    networks:
      mp-network:
        ipv4_address: 192.168.220.10
...
It worked before. I don't have a local nginx or apache2 (port 80 is free), and I tried changing the port in the docker-compose file; the result is the same.
Help please.
I solved this problem. The cause was ipv4_address: 192.168.220.10: I already had 10 services with addresses in 192.168.220.*, and the 11th, nginx, had a hard-coded IP that was already in use.
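Before assigning a static address, you can list which containers already hold which IPs on the network. A sketch, assuming the network from the compose file above (note that Compose prefixes the network name with the project name, so check `docker network ls` first):

```shell
# Print each container's name and IPv4 address on the network
# to spot the collision before hard-coding an ipv4_address.
docker network inspect mp-network \
  --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}'
```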
I am trying to set up an nginx container that serves the nginx HTML page at "http://server_ip/" and the tutum/hello-world container at the "/app" path. As a follow-up, I want the "hello-world" container to be reachable only via "http://server_ip/app", not via http://server_ip:1500.
I created the following docker-compose file:
version: '3'
services:
  proxy:
    container_name: proxy
    image: nginx
    ports:
      - "80:80"
    volumes:
      - $PWD/html:/usr/share/nginx/html
      - $PWD/config/nginx.conf:/etc/nginx/conf.d/nginx.conf
    networks:
      - backend
  webapp:
    container_name: webapp
    image: tutum/hello-world
    ports:
      - "1500:80"
    networks:
      - backend
networks:
  backend:
Then I have the following nginx.conf file:
server {
    listen 80; # not really needed, but more informative
    location = / {
        root /usr/share/nginx/html;
    }
    location = /app/ {
        proxy_pass http://localhost:1500/;
    }
}
If I try to reach each container via http://server_ip:PORT, it works. If I try http://server_ip/app, I get "404 Not Found". What am I missing? Did I put the conf file in the wrong folder? And how do I restrict "hello-world" to the "http://server_ip/app" path only, so it is not reachable via "http://server_ip:1500"?
Your containers use the "backend" Docker network, as stated in the compose file.
Inside that network they reach each other by service name, so from the "proxy" service you can reach the "webapp" service at http://webapp (or http://webapp:80), and from the "webapp" service you can reach "proxy" at http://proxy (or http://proxy:80).
On your computer, http://localhost:1500/ reaches the webapp service and http://localhost:80/ reaches the proxy service.
The port mapping 1500:80 means your computer's port 1500 is mapped to the webapp container's port 80.
So in nginx.conf do this:
proxy_pass http://webapp:80/;
Also, if you want webapp to be inaccessible from your host on localhost:1500, remove the ports section from the webapp service spec:
version: '3'
services:
  proxy:
    container_name: proxy
    image: nginx:1.11
    ports:
      - "80:80"
    volumes:
      - $PWD/html:/usr/share/nginx/html
      - $PWD/nginx.conf:/etc/nginx/conf.d/default.conf
    networks:
      - backend
  webapp:
    container_name: webapp
    image: tutum/hello-world
    networks:
      - backend
networks:
  backend:
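With the ports mapping removed, the hello-world container is only reachable through the proxy. A quick check from the host, assuming the stack above is up and the /app/ location is configured as shown:

```shell
# Served by nginx directly
curl -s http://localhost/

# Matches "location = /app/" and is proxied to the webapp container
curl -s http://localhost/app/

# Should now fail: the 1500:80 mapping is gone
curl -s --max-time 2 http://localhost:1500/ || echo "not exposed"
```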
I'm trying to create the following setup with a reverse proxy (the websites on their own work), but with the mail server added, docker-compose reports a port conflict.
What I want is for the websites to coexist with the mail service, e.g. mail.sitea.com.
Any suggestions?
version: '3.4'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    restart: always
    ports:
      - 443:443
      - 80:80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  site-a:
    image: nginx
    restart: always
    expose:
      - '80'
    volumes:
      - /var/www/site-a/public_html/:/usr/share/nginx/html:ro
    environment:
      - VIRTUAL_HOST=sitea.com
  site-b:
    image: nginx
    restart: always
    expose:
      - '80'
    volumes:
      - /var/www/site-b/public_html/:/usr/share/nginx/html:ro
    environment:
      - VIRTUAL_HOST=siteb.com
  poste:
    image: analogic/poste.io
    restart: always
    network_mode: "host"
    expose:
      - 25
      - 80
      - 443
      - 110
      - 143
      - 465
      - 587
      - 993
      - 995
    volumes:
      - /docker/mail:/data
    environment:
      - HTTPS=ON
      - DISABLE_CLAMAV=TRUE
Output:
/docker# vim /docker/docker-compose.yml
root#adroconstruccion:/docker# docker-compose up -d
Removing docker_nginx-proxy_1
docker_site-b_1 is up-to-date
docker_site-a_1 is up-to-date
Starting 08fba2b58471_docker_nginx-proxy_1 ... error
Recreating docker_poste_1 ...
ERROR: for 08fba2b58471_docker_nginx-proxy_1 Cannot start service nginx-proxy: driver failed programming external connectivity on endpoint 08fba2b58471_docker_nginx-proxy_1 (095659f459c1af8c729129074520b40800e528719727061bdbc4bfa25f6c37d5)Recreating docker_poste_1 ... done
ERROR: for nginx-proxy Cannot start service nginx-proxy: driver failed programming external connectivity on endpoint 08fba2b58471_docker_nginx-proxy_1 (095659f459c1af8c729129074520b40800e528719727061bdbc4bfa25f6c37d5): Error starting userland proxy: listen tcp 0.0.0.0:443: bind: address already in use
ERROR: Encountered errors while bringing up the project.
If you run poste with network_mode: "host", it listens on those ports directly on the host network, so the port 443 listener in the poste service conflicts with nginx-proxy.
Depending on what you're using the poste container for, you might be able to get away with just commenting out ports 80 and 443 and using only the mail ports. Or you could remove network_mode: "host" and perhaps proxy them through nginx.
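Before restarting the stack, it can help to confirm exactly what is already bound to 443 on the host. A sketch (output will vary by system):

```shell
# Which process currently holds port 443 on the host?
sudo ss -ltnp 'sport = :443'

# Which containers publish ports (or run with host networking, shown with no Ports)?
docker ps --format '{{.Names}}\t{{.Ports}}'
```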
Heyo!
Update: I figured it out and added my answer.
I'm currently learning Docker and have written a docker-compose file that launches nginx, gitea, and nextcloud and routes them all by domain name through an nginx reverse proxy.
All is going well except for nextcloud. I can access it via localhost:3001 but not through the nginx reverse proxy. Gitea works both ways.
The error I'm getting is:
nginx_proxy | 2018/08/10 00:17:34 [error] 8#8: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: cloud.example.ca, request: "GET / HTTP/1.1", upstream: "http://172.19.0.4:3001/", host: "cloud.example.ca"
docker-compose.yml:
version: '3.1'
services:
  nginx:
    container_name: nginx_proxy
    image: nginx:latest
    restart: always
    volumes:
      # Here I'm swapping out the container's default.conf by mounting my
      # directory over theirs.
      - ./nginx-conf:/etc/nginx/conf.d
    ports:
      - 80:80
      - 443:443
    networks:
      - proxy
  nextcloud_db:
    container_name: nextcloud_db
    image: mariadb:latest
    restart: always
    volumes:
      - nextcloud_db:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/cloud_db_root
      MYSQL_PASSWORD_FILE: /run/secrets/cloud_db_pass
      MYSQL_DATABASE: devcloud
      MYSQL_USER: devcloud
    secrets:
      - cloud_db_root
      - cloud_db_pass
    networks:
      - database
  gitea_db:
    container_name: gitea_db
    image: mariadb:latest
    restart: always
    volumes:
      - gitea_db:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/cloud_db_root
      MYSQL_PASSWORD_FILE: /run/secrets/cloud_db_pass
      MYSQL_DATABASE: gitea
      MYSQL_USER: gitea
    secrets:
      - cloud_db_root
      - cloud_db_pass
    networks:
      - database
  nextcloud:
    image: nextcloud
    container_name: nextcloud
    ports:
      - 3001:80
    volumes:
      - nextcloud:/var/www/html
    restart: always
    networks:
      - proxy
      - database
  gitea:
    container_name: gitea
    image: gitea/gitea:latest
    environment:
      - USER_UID=1000
      - USER_GID=1000
    restart: always
    volumes:
      - gitea:/data
    ports:
      - 3000:3000
      - 22:22
    networks:
      - proxy
      - database
volumes:
  nextcloud:
  nextcloud_db:
  gitea:
  gitea_db:
networks:
  proxy:
  database:
secrets:
  cloud_db_pass:
    file: cloud_db_pass.txt
  cloud_db_root:
    file: cloud_db_root.txt
My default.conf, which gets mounted into /etc/nginx/conf.d/default.conf:
upstream nextcloud {
    server nextcloud:3001;
}
upstream gitea {
    server gitea:3000;
}
server {
    listen 80;
    listen [::]:80;
    server_name cloud.example.ca;
    location / {
        proxy_pass http://nextcloud;
    }
}
server {
    listen 80;
    listen [::]:80;
    server_name git.example.ca;
    location / {
        proxy_pass http://gitea;
    }
}
I do, of course, have my hosts file set up to route the domains to localhost. I've done some googling, but nothing I've found so far matches what I'm running into. Thanks in advance!
Long story short: one does not simply reverse proxy to port 80 with Nextcloud. It's just not allowed. I now have it deployed and working great with a certificate over 443! :)
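For anyone hitting the same "Connection refused" in the nginx log: a published mapping like 3001:80 only applies on the host, so inside the shared proxy network the container answers on its internal port, not 3001. A quick way to see which port actually responds from inside the network (the network name here is an assumption; Compose prefixes it with the project name, e.g. "myproject_proxy", so check `docker network ls`):

```shell
# Probe the container's internal port from a throwaway container
# attached to the same network as the nginx proxy.
docker run --rm --network myproject_proxy curlimages/curl \
  -s -o /dev/null -w '%{http_code}\n' http://nextcloud:80/
```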