503 responses with jwilder/nginx-proxy from docker-compose nginx setup - docker

I have a server running nginx with SSL (certbot) that points all traffic to port 5555, where my nginx-proxy is running. I'm trying to get it to route all my traffic to the appropriate services.
Here is my docker-compose setup:
nginx-proxy:
container_name: nginx-proxy
image: jwilder/nginx-proxy
ports:
- '5555:80'
networks:
app_net:
ipv4_address: 172.26.111.111 # from the .0/24 subnet
environment:
- VIRTUAL_PORT=5555
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ./nginx/prod/proxy.conf:/etc/nginx/conf.d/proxy.conf:ro
text-rewriter-service:
container_name: text-rewriter-service
build:
context: ./text-rewriter-service
ports:
- '8001:8001'
networks:
app_net:
ipv4_address: 172.26.111.13
environment:
- APP_ENV=prod
- NODE_ENV=production
- PORT=8001
And my nginx proxy.conf file:
server {
server_name localhost;
listen 80;
access_log /var/log/nginx/access.log;
listen [::]:80;
# text-rewriter-service
location ~* ^/graphql(/?)(.*)$ {
set $query $2;
proxy_pass http://172.26.111.13:8001$1$query$is_args$args;
}
}
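(For reference, the capture groups in that location block can be checked outside nginx; here is a small bash sketch, using a hypothetical request path, showing what `$1` and `$2` hold and why the `/graphql` prefix itself never reaches the upstream:)

```shell
# Reproduce the regex from the location block above in bash.
# For a request to /graphql/foo, group 1 captures the optional slash and
# group 2 the remainder, so proxy_pass ...$1$query forwards only "/foo".
path="/graphql/foo"   # hypothetical request path
if [[ $path =~ ^/graphql(/?)(.*)$ ]]; then
  echo "group1='${BASH_REMATCH[1]}' group2='${BASH_REMATCH[2]}'"
fi
```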
And the nginx conf on the host server:
server {
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
server_name example.com www.example.com;
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://0.0.0.0:5555;
}
listen [::]:443 ssl ipv6only=on; # managed by Certbot
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
I've tried (and failed):
replacing the IP with text-rewriter-service:8001
changing the server_name to example.com
using VIRTUAL_HOST and VIRTUAL_PORT in the app container
removing VIRTUAL_PORT from the nginx-proxy environment
Here is the nginx log output:
www.example.com 172.26.111.1 - - [08/Dec/2018:01:18:19 +0000] "GET /graphql HTTP/1.0" 503 615 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36"
A possible solution I didn't fully understand how to get working: https://github.com/jwilder/nginx-proxy/issues/582
I think it's the proxy_set_header directives in the server's nginx config that I need to change?
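(Context on the 503: jwilder/nginx-proxy builds its routes from the VIRTUAL_HOST environment variable of the containers it proxies and answers 503 when no container matches the incoming Host header. A sketch of what registering the backend might look like; the hostnames are placeholders, and VIRTUAL_HOST belongs on the proxied service, not on nginx-proxy itself:)

```yaml
# Hypothetical sketch: register the backend with nginx-proxy.
# nginx-proxy returns 503 when no container's VIRTUAL_HOST matches
# the requested Host header.
text-rewriter-service:
  environment:
    - VIRTUAL_HOST=example.com,www.example.com   # placeholder hostnames
    - VIRTUAL_PORT=8001
```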

Related

Redirect Odoo 8069 to HTTPS without VPC config (AWS/VPS)

I created a GitHub repo a few weeks ago with Docker Compose, Odoo, PostgreSQL, Certbot, Nginx as a proxy server, and a little bit of PHP stuff (Symfony) -> https://github.com/Inushin/dockerOdooSymfonySSL While trying out the config I found that NGINX works as it is supposed to and you get the correct HTTP -> HTTPS redirect, BUT if you request port 8069 directly, the browser goes to plain HTTP. One of the solutions would be to configure another VPC, but I was thinking about using this repo for other "minimal VPS services" without needing another VPC, so... how could I solve this? Maybe from the Odoo config? Is something missing in the NGINX conf?
NGINX
#FOR THE ODOO DOMAIN
server {
listen 80;
server_name DOMAIN_ODOO;
server_tokens off;
location / {
return 301 https://$server_name$request_uri;
}
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
}
server {
listen 443 ssl;
server_name DOMAIN_ODOO;
server_tokens off;
location / {
proxy_pass http://web:8069;
proxy_set_header Host DOMAIN_ODOO;
proxy_set_header X-Forwarded-For $remote_addr;
}
ssl_certificate /etc/letsencrypt/live/DOMAIN_ODOO/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/DOMAIN_ODOO/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
docker-compose.yml
nginx:
image: nginx:1.15-alpine
expose:
- "80"
- "443"
ports:
- "80:80"
- "443:443"
networks:
- default
volumes:
- ./data/nginx:/etc/nginx/conf.d/:rw
- ./data/certbot/conf:/etc/letsencrypt/:rw
- ./data/certbotSymfony/conf:/etc/letsencrypt/symfony/:rw
- ./data/certbotSymfony/www:/var/www/certbot/:rw
command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
web:
image: odoo:13.0
depends_on:
- db
ports:
- "8069:8069/tcp"
volumes:
- web-data:/var/lib/odoo
- ./data/odoo/config:/etc/odoo
- ./data/odoo/addons:/mnt/extra-addons
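(One commonly suggested direction, sketched here and not verified against this repo: stop publishing 8069 on the host, so the only way in is through nginx on 80/443. `expose` keeps the port reachable from the nginx container on the compose network while removing the direct host mapping:)

```yaml
# Sketch: no host port mapping for Odoo; traffic must go through nginx.
web:
  image: odoo:13.0
  depends_on:
    - db
  expose:
    - "8069"   # reachable by other compose services, not published on the host
  volumes:
    - web-data:/var/lib/odoo
    - ./data/odoo/config:/etc/odoo
    - ./data/odoo/addons:/mnt/extra-addons
```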

502 Error on Production Deployment Django & Nginx using a docker compose file

I am using docker-compose to build containers and to serve the frontend of my website at https://example.com and the backend at a subdomain, https://api.example.com. The SSL certificates for both the root and subdomain are working properly, and I can access the live site (static files served by Nginx) at https://example.com, so at least half of the configuration is working. The problem occurs when the frontend tries to communicate with the backend: all calls are met with a "No 'Access-Control-Allow-Origin'" 502 error in the console logs. In the logs of the docker container, this is the error response.
Docker Container Error
2022/03/09 19:01:21 [error] 30#30: *7 connect() failed (111: Connection refused) while connecting
to upstream, client: xxx.xx.xxx.xxx, server: api.example.com, request: "GET /api/services/images/
HTTP/1.1", upstream: "http://127.0.0.1:8000/api/services/images/",
host: "api.example.com", referrer: "https://example.com/"
I think it's likely that something is wrong with my Nginx or docker-compose configuration. When setting SECURE_SSL_REDIRECT, SECURE_HSTS_INCLUDE_SUBDOMAINS, and SECURE_HSTS_SECONDS to False or None (in the Django settings), I am able to hit http://api.example.com:8000/api/services/images/ and get the data I am looking for. So the backend is running and hooked up, just not taking requests from where I want it to. I've attached the Nginx configuration and the docker-compose.yml. Please let me know if you need more info; I would greatly appreciate any input, and thanks in advance for the help.
Nginx-custom.conf
# Config for the frontend application under example.com
server {
listen 80;
server_name example.com www.example.com;
if ($host = www.example.com) {
return 301 https://$host$request_uri;
}
if ($host = example.com) {
return 301 https://$host$request_uri;
}
return 404;
}
server {
server_name example.com www.example.com;
index index.html index.htm;
add_header Access-Control-Allow-Origin $http_origin;
add_header Access-Control-Allow-Credentials true;
add_header Access-Control-Allow-Headers $http_access_control_request_headers;
add_header Access-Control-Allow-Methods $http_access_control_request_method;
location / {
root /usr/share/nginx/html;
try_files $uri /index.html =404;
}
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
##### Config for the backend server at api.example.com
server {
listen 80;
server_name api.example.com;
return 301 https://$host$request_uri;
}
server {
server_name api.example.com;
add_header Access-Control-Allow-Origin $http_origin;
add_header Access-Control-Allow-Credentials true;
add_header Access-Control-Allow-Headers $http_access_control_request_headers;
add_header Access-Control-Allow-Methods $http_access_control_request_method;
location / {
proxy_pass http://127.0.0.1:8000/; #API Server
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto https;
proxy_redirect off;
}
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/api.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
Docker-Compose File
version: '3.9'
# services that make up the development env
services:
# DJANGO BACKEND
backend:
container_name: example-backend
restart: unless-stopped
image: example-backend:1.0.1
build:
context: ./backend/src
dockerfile: Dockerfile
command: gunicorn example.wsgi:application --bind 0.0.0.0:8000
ports:
- 8000:8000
environment:
- SECRET_KEY=xxx
- DEBUG=0
- ALLOWED_HOSTS=example.com,api.example.com,xxx.xxx.xxx.x
- DB_HOST=postgres-db
- DB_NAME=xxx
- DB_USER=xxx
- DB_PASS=xxx
- EMAIL_HOST_PASS=xxx
# sets a dependency on the db container and there should be a network connection between the two
networks:
- db-net
- shared-network
links:
- postgres-db:postgres-db
depends_on:
- postgres-db
# POSTGRES DATABASE
postgres-db:
container_name: postgres-db
image: postgres
restart: always
volumes:
- example-data:/var/lib/postgresql/data
ports:
- 5432:5432
environment:
- POSTGRES_DB=exampledb
- POSTGRES_USER=user
- POSTGRES_PASSWORD=pass
networks:
- db-net
# ANGULAR & NGINX FRONTEND
frontend:
container_name: example-frontend
build:
context: ./frontend
ports:
- "80:80"
- "443:443"
networks:
- shared-network
links:
- backend
depends_on:
- backend
networks:
shared-network:
driver: bridge
db-net:
volumes:
example-data:
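(For what it's worth, a likely culprit given the compose file: inside the frontend container, 127.0.0.1:8000 is the frontend container itself, not the backend. On the shared bridge network the backend is reachable by its compose service name, so the proxy block might instead look like this sketch, with the service name taken from the compose file above:)

```nginx
location / {
    proxy_pass http://backend:8000/;   # compose service name, not 127.0.0.1
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Proto https;
    proxy_redirect off;
}
```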

Why is https not working for my site hosted in docker?

I have a site running in docker with 4 containers: a react front end, a .net backend, a sql database, and an nginx server. My docker compose file looks like this:
version: '3'
services:
sssfe:
image: mydockerhub:myimage-fe-1.3
ports:
- 9000:9000
volumes:
- sssfev:/usr/share/nginx/html
depends_on:
- sssapi
sssapi:
image: mydockerhub:myimage-api-1.3
environment:
- SQL_CONNECTION=myconnection
ports:
- 44384:44384
depends_on:
- jbdatabase
jbdatabase:
image: mcr.microsoft.com/mssql/server:2019-latest
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=mypass
volumes:
- dbdata:/var/opt/mssql
ports:
- 1433:1433
reverseproxy:
build:
context: ./nginx
dockerfile: Dockerfile
ports:
- "80:80"
- "443:443"
volumes:
- example_certbot-etc:/etc/letsencrypt
links :
- sssfe
certbot:
depends_on:
- reverseproxy
image: certbot/certbot
container_name: certbot
volumes:
- example_certbot-etc:/etc/letsencrypt
- sssfev:/usr/share/nginx/html
command: certonly --webroot --webroot-path=/usr/share/nginx/html --email myemail --agree-tos --no-eff-email --force-renewal -d example.com -d www.example.com
volumes:
example_certbot-etc:
external: true
dbdata:
sssfev:
I was following this link and am using certbot and letsencrypt for the certificate. My nginx conf file is this:
server {
listen 80;
listen [::]:80;
server_name example.com www.example.com;
location / {
rewrite ^ https://$host$request_uri? permanent;
}
location ~ /.well-known/acme-challenge {
allow all;
root /usr/share/nginx/html;
}
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name example.com www.example.com;
index index.html index.htm;
root /usr/share/nginx/html;
server_tokens off;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
include /etc/nginx/conf.d/options-ssl-nginx.conf;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
# add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
# enable strict transport security only if you understand the implications
location = /favicon.ico {
log_not_found off; access_log off;
}
location = /robots.txt {
log_not_found off; access_log off; allow all;
}
location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
expires max;
log_not_found off;
}
}
My issue is that https doesn't work for my site. When I hit https://example.com, I get ERR_CONNECTION_REFUSED. The non-https site resolves and works fine, however. I can't figure out what's going on. It looks like the ssl port is open and nginx is listening on it:
ss -tulpn | grep LISTEN
tcp LISTEN 0 128 *:9000 *:* users:(("docker-proxy",pid=18336,fd=4))
tcp LISTEN 0 128 *:80 *:* users:(("docker-proxy",pid=18464,fd=4))
tcp LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=420,fd=4))
tcp LISTEN 0 128 *:1433 *:* users:(("docker-proxy",pid=18152,fd=4))
tcp LISTEN 0 128 *:443 *:* users:(("docker-proxy",pid=18452,fd=4))
tcp LISTEN 0 128 *:44384 *:* users:(("docker-proxy",pid=18243,fd=4))
And my containers:
reverseproxy 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp
sssfe 80/tcp, 0.0.0.0:9000->9000/tcp
sssapi 0.0.0.0:44384->44384/tcp
database 0.0.0.0:1433->1433/tcp
I'm assuming it's an issue with my nginx config, but I'm new to this and not sure where to go from here.
If you need to support SSL, first create the config and cert directories:
mkdir /opt/docker/nginx/conf.d -p
touch /opt/docker/nginx/conf.d/nginx.conf
mkdir /opt/docker/nginx/cert -p
Then edit the config:
vim /opt/docker/nginx/conf.d/nginx.conf
The following forces a redirect to https when the site is accessed over http:
server {
listen 443 ssl;
server_name example.com www.example.com; # domain
# Pay attention to the file location, starting from /etc/nginx/
ssl_certificate 1_www.example.com_bundle.crt;
ssl_certificate_key 2_www.example.com.key;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!RC4:!DHE;
ssl_prefer_server_ciphers on;
client_max_body_size 1024m;
location / {
proxy_set_header HOST $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
#Intranet address
proxy_pass http://172.17.0.8:9090; #change it
}
}
server {
listen 80;
server_name example.com www.example.com; # the domain the certificate is bound to
# redirect HTTP requests to HTTPS
return 301 https://$host$request_uri;
}
docker run -itd --name nginx -p 80:80 -p 443:443 -v /opt/docker/nginx/conf.d/nginx.conf:/etc/nginx/conf.d/nginx.conf -v /opt/docker/nginx/cert:/etc/nginx -m 100m nginx
After startup, run docker ps to check whether the container started successfully, and docker logs nginx to view the logs.
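(The same container can be expressed in compose form; this is a sketch mirroring the flags of the docker run command above, with the paths as given in the answer:)

```yaml
# Sketch: compose equivalent of the docker run command above.
nginx:
  image: nginx
  container_name: nginx
  mem_limit: 100m          # mirrors -m 100m (v2 compose schema)
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - /opt/docker/nginx/conf.d/nginx.conf:/etc/nginx/conf.d/nginx.conf
    - /opt/docker/nginx/cert:/etc/nginx   # note: masks the image's /etc/nginx, as in the docker run
```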

Docker nginx reverseProxy Connection Refused

I have 2 projects one called defaultWebsite and the other one nginxProxy.
I am trying to set up the following:
In /etc/hosts I have set up 127.0.0.1 default.local, and docker containers are running for all projects. I did not add a php-fpm container for the reverseProxy (should I?)
nginxReverseProxy default.config
#sample setup
upstream default_local {
server host.docker.internal:31443;
}
server {
listen 0.0.0.0:80;
return 301 https://$host$request_uri;
}
server {
listen 0.0.0.0:443 ssl;
server_name default.local;
ssl_certificate /etc/ssl/private/localhost/default_dev.crt;
ssl_certificate_key /etc/ssl/private/localhost/default_dev.key;
#ssl_verify_client off;
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
ssl_prefer_server_ciphers on;
index index.php index.html index.htm index.nginx-debian.html;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $proxy_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass https://default_local;
}
}
defaultWebsite config:
server {
listen 0.0.0.0:80;
server_name default.local;
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 0.0.0.0:443 ssl;
server_name default.local;
root /app/public;
#this is for local. on production this will be different.
ssl_certificate /etc/ssl/default.local/localhost.crt;
ssl_certificate_key /etc/ssl/default.local/localhost.key;
location / {
try_files $uri /index.php$is_args$args;
}
location ~ ^/index\.php(/|$) {
fastcgi_pass php-fpm:9000;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;
internal;
}
# return 404 for all other php files not matching the front controller
# this prevents access to other php files you don't want to be accessible.
location ~ \.php$ {
return 404;
}
error_log /var/log/nginx/default_error.log;
access_log /var/log/nginx/default_access.log;
}
docker-compose.yml for defaultWebsite:
services:
nginx:
build: DockerConfig/nginx
working_dir: /app
volumes:
- .:/app
- ./log:/log
- ./data/nginx/htpasswd:/etc/nginx/.htpasswd
- ./data/nginx/nginx_dev.conf:/etc/nginx/conf.d/default.conf
depends_on:
- php-fpm
- mysql
links:
- php-fpm
- mysql
ports:
- "31080:80"
- "31443:443"
expose:
- "31080"
- "31443"
environment:
VIRUAL_HOST: "default.local"
APP_FRONT_CONTROLLER: "public/index.php"
networks:
default:
aliases:
- default
php-fpm:
build: DockerConfig/php-fpm
working_dir: /app
volumes:
- .:/app
- ./log:/log
- ./data/php-fpm/php-ini-overrides.ini:/etc/php/7.3/fpm/conf.d/99-overrides.ini
ports:
- "30902:9000"
expose:
- "30902"
extra_hosts:
- "default.local:127.0.0.1"
networks:
- default
environment:
XDEBUG_CONFIG: "remote_host=172.29.0.1 remote_enable=1 remote_autostart=1 idekey=\"PHPSTORM\" remote_log=\"/var/log/xdebug.log\""
PHP_IDE_CONFIG: "serverName=default.local"
docker-compose.yml for nginxReverseProxy:
services:
reverse_proxy:
build: DockerConfig/nginx
hostname: reverseProxy
ports:
- 80:80
- 443:443
extra_hosts:
- "host.docker.internal:127.0.0.1"
volumes:
- ./data/nginx/dev/default_dev.conf:/etc/nginx/conf.d/default.conf
- ./data/certs:/etc/ssl/private/
docker ps output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6e9a8479e6f8 default_nginx "nginx -g 'daemon of…" 12 hours ago Up 12 hours 31080/tcp, 31443/tcp, 0.0.0.0:31080->80/tcp, 0.0.0.0:31443->443/tcp default_nginx_1
5e1df4d6f1f5 default_php-fpm "/usr/sbin/php-fpm7.…" 12 hours ago Up 12 hours 30902/tcp, 0.0.0.0:30902->9000/tcp default_php-fpm_1
f3ec76cd7148 default_mysql "/entrypoint.sh mysq…" 12 hours ago Up 12 hours (healthy) 33060/tcp, 0.0.0.0:31336->3306/tcp default_mysql_1
d633511bc6a8 proxy_reverse_proxy "/bin/sh -c 'exec ng…" 12 hours ago Up 12 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp proxy_reverse_proxy_1
If I access default.local:31443 directly, I can see the page working.
When I try to access http://default.local it redirects me to https://default.local, but at the same time I get this error:
reverse_proxy_1 | 2020/04/14 15:22:43 [error] 6#6: *4 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.80.1, server: default.local, request: "GET / HTTP/1.1", upstream: "https://127.0.0.1:31443/", host: "default.local"
Not sure this is the answer, but it's too long for a comment.
On your nginx conf, you have:
upstream default_local {
server host.docker.internal:31443;
}
and as I see it (I could be wrong here), you have a different container accessing it:
extra_hosts:
- "host.docker.internal:127.0.0.1"
but you set the hostname to 127.0.0.1; shouldn't it be the docker host IP, since it is connecting to a different container?
In general, make sure the docker host IP is used on all containers whenever they need to connect to another container or to the outside.
OK, so it seems that the docker host IP should be used on Linux machines, because the "host.docker.internal" name does not exist there yet (it is to be added in a future version).
To get the docker IP on Linux, it should be enough to run ip addr | grep "docker"
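(A sketch of pulling just the address out of that output; the echoed line below is a hypothetical sample of real `ip` output, and on an actual host you would pipe `ip -4 addr show docker0` into the same filter:)

```shell
# Extract the IPv4 address from an `ip addr` line for the docker0 bridge.
# The echoed line is a hypothetical sample of real `ip` output.
echo "    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0" \
  | grep -oE 'inet [0-9.]+' | cut -d' ' -f2
```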
So the final config for the reverse_proxy default.conf should look something like this:
upstream default_name {
server 172.17.0.1:52443;
}
#redirect to https
server {
listen 80;
return 301 https://$host$request_uri;
}
server {
server_name default.localhost;
listen 443 ssl http2;
large_client_header_buffers 4 16k;
ssl_certificate /etc/ssl/private/localhost/whatever_dev.crt;
ssl_certificate_key /etc/ssl/private/localhost/whatever_dev.key;
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
ssl_prefer_server_ciphers on;
index index.php index.html index.htm index.nginx-debian.html;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $proxy_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass https://default_name;
}
}

Dockerized Nginx redirect different subdomains to the same page

Background
I am trying to set up a home server with a bunch of webapps. For now, I am focusing on seafile and a static html page. The server uses Docker and Nginx.
I want to achieve the following behavior:
The address domain-name.eu redirects to the static pages, saying "Welcome to Domain Name".
The address seafile.domain-name.eu redirects to the seafile container.
Problem description
I followed various tutorials on the web on how to set up a docker-compose.yml and the nginx conf to allow nginx to serve different website. I manage to have my seafile working alone behind nginx on the right address, and I manage to have the static page working alone behind nginx on the right address. When I try to mix both, both domain-name.eu and seafile.domain-name.eu serve the static page "Welcome to Domain Name".
Here is my docker-compose.yml:
nginx:
image: nginx
ports:
- 80:80
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
- ./html/:/usr/share/nginx/html
links:
- seafile
seafile:
image: seafileltd/seafile:latest
expose:
- 80
volumes:
- /home/docker/seafile-data:/shared
And my nginx.conf:
http {
upstream seafile {
server seafile;
}
server {
listen 80;
server_name seafile.domain-name.eu;
location / {
proxy_pass http://seafile/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_cache_bypass $http_upgrade;
}
}
server {
listen 80;
server_name domain-name.eu;
root /usr/share/nginx/html;
index index.html index.htm;
}
}
events {}
When I try to access seafile.domain-name.eu, I receive this log from the nginx container:
nginx_1 | xxx.xxx.xxx.xxx - - [05/Jun/2018:09:44:24 +0000] "GET / HTTP/1.1" 200 22 "http://seafile.domain-name.eu/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0"
And when I try to access domain-name.eu, I receive this:
nginx_1 | xxx.xxx.xxx.xxx - - [05/Jun/2018:10:07:11 +0000] "GET / HTTP/1.1" 200 22 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0"
So the address is indeed recognized for the seafile part, which helped me eliminate a bad configuration of my DNS as a possible cause. Or am I mistaken?
Can anyone help me troubleshoot the problem?
Thanks.
EDIT: Adding docker ps output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b6d018169d76 nginx "nginx -g 'daemon of…" About an hour ago Up About an hour 0.0.0.0:80->80/tcp jarvis-compose_nginx_1
7e701ce7650d seafileltd/seafile:latest "/sbin/my_init -- /s…" About an hour ago Up About an hour 80/tcp jarvis-compose_seafile_1
EDIT 2 : the problem was due to a configuration error (see accepted answer) + a residual redirection from my old registrar that was causing weird behavior. Thanks for the help! 
I tried running this locally and found that you've mounted the wrong nginx config file path in the container. nginx.conf should be mounted at /etc/nginx/conf.d/default.conf, which is the default config location supporting vhosts in nginx. Below is the correct setup:
nginx.conf
upstream seafile {
server seafile;
}
server {
listen 80;
server_name seafile.domain-name.eu;
location / {
proxy_pass http://seafile/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_cache_bypass $http_upgrade;
}
}
server {
listen 80;
server_name domain-name.eu;
root /usr/share/nginx/html;
index index.html index.htm;
}
docker-compose.yml
version: '3'
services:
nginx:
image: nginx
ports:
- 80:80
volumes:
- ./nginx.conf:/etc/nginx/conf.d/default.conf
- ./html/:/usr/share/nginx/html
links:
- seafile
container_name: web
seafile:
image: seafileltd/seafile:latest
expose:
- 80
volumes:
- /home/docker/seafile-data:/shared
container_name: seafile_server
