I have two projects, one called defaultWebsite and the other nginxProxy.
I am trying to set up the following:
In /etc/hosts I have 127.0.0.1 default.local, and Docker containers are running for everything. I did not add a php-fpm container for the reverse proxy (should I?).
nginxReverseProxy default.config
#sample setup
upstream default_local {
    server host.docker.internal:31443;
}

server {
    listen 0.0.0.0:80;
    return 301 https://$host$request_uri;
}

server {
    listen 0.0.0.0:443 ssl;
    server_name default.local;

    ssl_certificate /etc/ssl/private/localhost/default_dev.crt;
    ssl_certificate_key /etc/ssl/private/localhost/default_dev.key;
    #ssl_verify_client off;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    index index.php index.html index.htm index.nginx-debian.html;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $proxy_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass https://default_local;
    }
}
defaultWebsite config:
server {
    listen 0.0.0.0:80;
    server_name default.local;

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 0.0.0.0:443 ssl;
    server_name default.local;
    root /app/public;

    #this is for local. on production this will be different.
    ssl_certificate /etc/ssl/default.local/localhost.crt;
    ssl_certificate_key /etc/ssl/default.local/localhost.key;

    location / {
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/index\.php(/|$) {
        fastcgi_pass php-fpm:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
        internal;
    }

    # return 404 for all other php files not matching the front controller
    # this prevents access to other php files you don't want to be accessible.
    location ~ \.php$ {
        return 404;
    }

    error_log /var/log/nginx/default_error.log;
    access_log /var/log/nginx/default_access.log;
}
docker-compose.yml for defaultWebsite:
services:
  nginx:
    build: DockerConfig/nginx
    working_dir: /app
    volumes:
      - .:/app
      - ./log:/log
      - ./data/nginx/htpasswd:/etc/nginx/.htpasswd
      - ./data/nginx/nginx_dev.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php-fpm
      - mysql
    links:
      - php-fpm
      - mysql
    ports:
      - "31080:80"
      - "31443:443"
    expose:
      - "31080"
      - "31443"
    environment:
      VIRUAL_HOST: "default.local"
      APP_FRONT_CONTROLLER: "public/index.php"
    networks:
      default:
        aliases:
          - default

  php-fpm:
    build: DockerConfig/php-fpm
    working_dir: /app
    volumes:
      - .:/app
      - ./log:/log
      - ./data/php-fpm/php-ini-overrides.ini:/etc/php/7.3/fpm/conf.d/99-overrides.ini
    ports:
      - "30902:9000"
    expose:
      - "30902"
    extra_hosts:
      - "default.local:127.0.0.1"
    networks:
      - default
    environment:
      XDEBUG_CONFIG: "remote_host=172.29.0.1 remote_enable=1 remote_autostart=1 idekey=\"PHPSTORM\" remote_log=\"/var/log/xdebug.log\""
      PHP_IDE_CONFIG: "serverName=default.local"
docker-compose.yml for nginxReverseProxy:
services:
  reverse_proxy:
    build: DockerConfig/nginx
    hostname: reverseProxy
    ports:
      - 80:80
      - 443:443
    extra_hosts:
      - "host.docker.internal:127.0.0.1"
    volumes:
      - ./data/nginx/dev/default_dev.conf:/etc/nginx/conf.d/default.conf
      - ./data/certs:/etc/ssl/private/
docker ps output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6e9a8479e6f8 default_nginx "nginx -g 'daemon of…" 12 hours ago Up 12 hours 31080/tcp, 31443/tcp, 0.0.0.0:31080->80/tcp, 0.0.0.0:31443->443/tcp default_nginx_1
5e1df4d6f1f5 default_php-fpm "/usr/sbin/php-fpm7.…" 12 hours ago Up 12 hours 30902/tcp, 0.0.0.0:30902->9000/tcp default_php-fpm_1
f3ec76cd7148 default_mysql "/entrypoint.sh mysq…" 12 hours ago Up 12 hours (healthy) 33060/tcp, 0.0.0.0:31336->3306/tcp default_mysql_1
d633511bc6a8 proxy_reverse_proxy "/bin/sh -c 'exec ng…" 12 hours ago Up 12 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp proxy_reverse_proxy_1
If I access default.local:31443 directly, I can see the page working.
When I try to access http://default.local it redirects me to https://default.local, but at the same time I get this error:
reverse_proxy_1 | 2020/04/14 15:22:43 [error] 6#6: *4 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.80.1, server: default.local, request: "GET / HTTP/1.1", upstream: "https://127.0.0.1:31443/", host: "default.local"
I'm not sure this is the answer, but it's too long for a comment.
In your nginx conf, you have:
upstream default_local {
    server host.docker.internal:31443;
}
and, as I see it (I could be wrong here), a different container is accessing it:
extra_hosts:
  - "host.docker.internal:127.0.0.1"
but you map that hostname to 127.0.0.1. Shouldn't it be the Docker host IP, since it is connecting to a different container?
In general, make sure the Docker host IP is used whenever a container needs to reach another container or the outside world.
OK, so it seems that on Linux machines the Docker host IP should be used, because the host.docker.internal name does not exist there yet (it is to be added in a future version).
To get the Docker IP on Linux, it should be enough to run ip addr | grep "docker".
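The lookup can be scripted. This is a sketch only: the sample line mimics typical `ip -4 addr show docker0` output, and 172.17.0.1 is the usual default bridge gateway, not a guaranteed value on every host.

```shell
# "host.docker.internal" is not available on Linux engines here, so read the
# bridge IP from the host. The address appears in `ip -4 addr show docker0`;
# the extraction is demonstrated on a captured sample line (assumed format):
sample='    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0'
docker_host_ip=$(printf '%s\n' "$sample" | awk '/inet /{sub(/\/.*$/, "", $2); print $2}')
echo "$docker_host_ip"   # prints 172.17.0.1
```

On a real machine, replace the `sample` variable with the live command output.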
So the final config for the reverse proxy's default.conf should look something like this:
upstream default_name {
    server 172.17.0.1:52443;
}

#redirect to https
server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    server_name default.localhost;
    listen 443 ssl http2;

    large_client_header_buffers 4 16k;

    ssl_certificate /etc/ssl/private/localhost/whatever_dev.crt;
    ssl_certificate_key /etc/ssl/private/localhost/whatever_dev.key;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    index index.php index.html index.htm index.nginx-debian.html;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $proxy_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass https://default_name;
    }
}
I have a problem with my Docker configuration.
I would like to set up a DNS name on an Ubuntu 22.04 server, but it is not working. I run my applications from local images, and the proxy pass goes through an NGINX container. I can reach the ports without problems, but as soon as I use the DNS domain, it cannot be reached.
docker-compose.yml
version: "3.9"

services:
  test_proxy:
    network_mode: bridge
    image: nginx
    tty: true
    container_name: test_nginx
    ports:
      - '80:80'
      - '443:443'
    depends_on:
      - test_client
      - test_backend
    networks:
      - test-network

  test-client:
    image: test_client
    container_name: test_client
    ports:
      - '1000:80'
    restart: 'always'
    networks:
      - test-network

  test_backend:
    image: test_backend
    restart: 'always'
    ports:
      - "2000:2000"
    expose:
      - "2000"
    depends_on:
      - test_db
    networks:
      - test-network

  test_db:
    image: mongo
    ports:
      - "27017:27017"
    container_name: test_db
    volumes:
      - /test/test-db
      - test_data:/test_db
    networks:
      - test-network
    restart: always

volumes:
  test_data:

networks:
  test-network:
    driver: bridge
nginx.conf in nginx container
upstream test_client {
    server test_client:80;
}

upstream test_backend {
    server test_backend:80;
}

server {
    listen 80;
    listen [::]:80;
    server_name test.eu www.test.eu;

    # ssl_certificate /run/secrets/ssl_cert;
    # ssl_certificate_key /run/secrets/ssl_key;
    # ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    # ssl_ciphers HIGH:!aNULL:!MD5;

    index index.html index.htm index.nginx-debian.html;

    add_header Cache-Control 'no-store, no-cache';
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";
    expires 0;
    charset utf-8;

    root /usr/share/nginx/html;
    resolver 127.0.0.11 valid=10s;
    set $session_name nginx_session;

    location ~ /\.(?!well-known).* {
        deny all;
    }

    # frontend
    location / {
        proxy_pass http://test_client$request_uri;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location = /robots.txt { access_log off; log_not_found off; }

    access_log off;
    error_log /var/log/nginx/quasar.myapp.com-error.log error;

    # backend
    location /api {
        proxy_pass http://test_backend$request_uri;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    listen 443;
    listen [::]:443;

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
I have already tried to set the resolver to resolver 127.0.0.11 ipv6=off valid=10s;
I am using docker-compose to build containers and to serve the frontend of my website at https:// example.com and the backend at a subdomain, https:// api.example.com. The SSL certificates for both the root and subdomain are working properly, and I can access the live site (static files served by Nginx) at https:// example.com so at least half of the configuration is working properly. The problem occurs when the frontend tries to communicate with the backend. All calls are met with a "No 'Access-Control-Allow-Origin'" 502 Error in the console logs. In the logs of the docker container, this is the error response.
Docker Container Error
2022/03/09 19:01:21 [error] 30#30: *7 connect() failed (111: Connection refused) while connecting
to upstream, client: xxx.xx.xxx.xxx, server: api.example.com, request: "GET /api/services/images/
HTTP/1.1", upstream: "http://127.0.0.1:8000/api/services/images/",
host: "api.example.com", referrer: "https://example.com/"
I think it's likely that something is wrong with my Nginx or docker-compose configuration. When setting the SECURE_SSL_REDIRECT, SECURE_HSTS_INCLUDE_SUBDOMAINS, and the SECURE_HSTS_SECONDS to False or None (in the Django settings) I am able to hit http:// api.example.com:8000/api/services/images/ and get the data I am looking for. So it is running and hooked up, just not taking requests from where I want it to be. I've attached the Nginx configuration and the docker-compose.yml. Please let me know if you need more info, I would greatly appreciate any input, and thanks in advance for the help.
Nginx-custom.conf
# Config for the frontend application under example.com
server {
listen 80;
server_name example.com www.example.com;
if ($host = www.example.com) {
return 301 https://$host$request_uri;
}
if ($host = example.com) {
return 301 https://$host$request_uri;
}
return 404;
}
server {
server_name example.com www.example.com;
index index.html index.htm;
add_header Access-Control-Allow-Origin $http_origin;
add_header Access-Control-Allow-Credentials true;
add_header Access-Control-Allow-Headers $http_access_control_request_headers;
add_header Access-Control-Allow-Methods $http_access_control_request_method;
location / {
root /usr/share/nginx/html;
try_files $uri /index.html =404;
}
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
##### Config for the backend server at api.example.com
server {
listen 80;
server_name api.example.com;
return 301 https://$host$request_uri;
}
server {
server_name api.example.com;
add_header Access-Control-Allow-Origin $http_origin;
add_header Access-Control-Allow-Credentials true;
add_header Access-Control-Allow-Headers $http_access_control_request_headers;
add_header Access-Control-Allow-Methods $http_access_control_request_method;
location / {
proxy_pass http://127.0.0.1:8000/; #API Server
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto https;
proxy_redirect off;
}
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/api.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
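The upstream in the error log, http://127.0.0.1:8000, is the giveaway: inside the frontend container, 127.0.0.1 is the nginx container itself, not the Django container. A sketch of the likely fix, assuming the compose service name backend and port 8000 (taken from the compose file that follows; adjust to your actual names):

```nginx
# Sketch (assumption: compose service "backend" listening on 8000, with both
# containers on the same compose network): proxy to the service name so
# Docker's DNS resolves it to the backend container rather than to nginx.
location / {
    proxy_pass http://backend:8000/;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Proto https;
    proxy_redirect off;
}
```

With the services sharing a network, the name resolves through Docker's embedded DNS; no separate upstream block is strictly required.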
Docker-Compose File
version: '3.9'

# services that make up the development env
services:
  # DJANGO BACKEND
  backend:
    container_name: example-backend
    restart: unless-stopped
    image: example-backend:1.0.1
    build:
      context: ./backend/src
      dockerfile: Dockerfile
    command: gunicorn example.wsgi:application --bind 0.0.0.0:8000
    ports:
      - 8000:8000
    environment:
      - SECRET_KEY=xxx
      - DEBUG=0
      - ALLOWED_HOSTS=example.com,api.example.com,xxx.xxx.xxx.x
      - DB_HOST=postgres-db
      - DB_NAME=xxx
      - DB_USER=xxx
      - DB_PASS=xxx
      - EMAIL_HOST_PASS=xxx
    # sets a dependency on the db container and there should be a network connection between the two
    networks:
      - db-net
      - shared-network
    links:
      - postgres-db:postgres-db
    depends_on:
      - postgres-db

  # POSTGRES DATABASE
  postgres-db:
    container_name: postgres-db
    image: postgres
    restart: always
    volumes:
      - example-data:/var/lib/postgresql/data
    ports:
      - 5432:5432
    environment:
      - POSTGRES_DB=exampledb
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    networks:
      - db-net

  # ANGULAR & NGINX FRONTEND
  frontend:
    container_name: example-frontend
    build:
      context: ./frontend
    ports:
      - "80:80"
      - "443:443"
    networks:
      - shared-network
    links:
      - backend
    depends_on:
      - backend

networks:
  shared-network:
    driver: bridge
  db-net:

volumes:
  example-data:
I have a site running in docker with 4 containers, a react front end, .net backend, sql database and nginx server. My docker compose file looks like this:
version: '3'

services:
  sssfe:
    image: mydockerhub:myimage-fe-1.3
    ports:
      - 9000:9000
    volumes:
      - sssfev:/usr/share/nginx/html
    depends_on:
      - sssapi

  sssapi:
    image: mydockerhub:myimage-api-1.3
    environment:
      - SQL_CONNECTION=myconnection
    ports:
      - 44384:44384
    depends_on:
      - jbdatabase

  jbdatabase:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=mypass
    volumes:
      - dbdata:/var/opt/mssql
    ports:
      - 1433:1433

  reverseproxy:
    build:
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - example_certbot-etc:/etc/letsencrypt
    links:
      - sssfe

  certbot:
    depends_on:
      - reverseproxy
    image: certbot/certbot
    container_name: certbot
    volumes:
      - example_certbot-etc:/etc/letsencrypt
      - sssfev:/usr/share/nginx/html
    command: certonly --webroot --webroot-path=/usr/share/nginx/html --email myemail --agree-tos --no-eff-email --force-renewal -d example.com -d www.example.com

volumes:
  example_certbot-etc:
    external: true
  dbdata:
  sssfev:
I was following this link and am using certbot and Let's Encrypt for the certificate. My nginx conf file is this:
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    location / {
        rewrite ^ https://$host$request_uri? permanent;
    }

    location ~ /.well-known/acme-challenge {
        allow all;
        root /usr/share/nginx/html;
    }
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com www.example.com;

    index index.html index.htm;
    root /usr/share/nginx/html;
    server_tokens off;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/nginx/conf.d/options-ssl-nginx.conf;

    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "no-referrer-when-downgrade" always;
    add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
    # add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
    # enable strict transport security only if you understand the implications

    location = /favicon.ico {
        log_not_found off; access_log off;
    }
    location = /robots.txt {
        log_not_found off; access_log off; allow all;
    }
    location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
        expires max;
        log_not_found off;
    }
}
My issue is that https doesn't work for my site. When I hit https://example.com, I get ERR_CONNECTION_REFUSED. The non https site resolves and works fine however. Can't figure out what's going on. It looks like the ssl port is open and nginx is listening to it:
ss -tulpn | grep LISTEN
tcp LISTEN 0 128 *:9000 *:* users:(("docker-proxy",pid=18336,fd=4))
tcp LISTEN 0 128 *:80 *:* users:(("docker-proxy",pid=18464,fd=4))
tcp LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=420,fd=4))
tcp LISTEN 0 128 *:1433 *:* users:(("docker-proxy",pid=18152,fd=4))
tcp LISTEN 0 128 *:443 *:* users:(("docker-proxy",pid=18452,fd=4))
tcp LISTEN 0 128 *:44384 *:* users:(("docker-proxy",pid=18243,fd=4))
And my containers:
reverseproxy 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp
sssfe 80/tcp, 0.0.0.0:9000->9000/tcp
sssapi 0.0.0.0:44384->44384/tcp
database 0.0.0.0:1433->1433/tcp
I'm assuming it's an issue with my nginx config, but I'm new to this and not sure where to go from here.
If you need to support SSL, please do this:
mkdir /opt/docker/nginx/conf.d -p
touch /opt/docker/nginx/conf.d/nginx.conf
mkdir /opt/docker/nginx/cert -p
then
vim /opt/docker/nginx/conf.d/nginx.conf
If you need to force the redirection to https when accessing http:
server {
    listen 443 ssl;
    server_name example.com www.example.com; # domain

    # Pay attention to the file location, starting from /etc/nginx/
    ssl_certificate 1_www.example.com_bundle.crt;
    ssl_certificate_key 2_www.example.com.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!RC4:!DHE;
    ssl_prefer_server_ciphers on;

    client_max_body_size 1024m;

    location / {
        proxy_set_header HOST $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Intranet address
        proxy_pass http://172.17.0.8:9090; # change it
    }
}

server {
    listen 80;
    server_name example.com www.example.com; # The domain name of the binding certificate
    # HTTP domain name request is converted to https
    return 301 https://$host$request_uri;
}
docker run -itd --name nginx -p 80:80 -p 443:443 -v /opt/docker/nginx/conf.d/nginx.conf:/etc/nginx/conf.d/nginx.conf -v /opt/docker/nginx/cert:/etc/nginx -m 100m nginx
After startup, run docker ps to check whether the container started successfully, and docker logs nginx to view the logs.
I run an application and nginx in Docker containers and am struggling to get it running:
This is my nginx.conf
server {
    server_name data-mastery.com; # managed by Certbot

    location / {
        proxy_pass http://shinyproxy:4000;
    }

    listen 443 ssl; # managed by Certbot
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_certificate /etc/nginx/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/nginx/privkey.pem; # managed by Certbot
    include /etc/nginx/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/nginx/ssl-dhparams.pem; # managed by Certbot

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 600s;
    proxy_redirect off;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

server {
    if ($host = data-mastery.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name data-mastery.com;
    return 404; # managed by Certbot
}
This is my docker-compose file:
version: "3.7"

services:
  shinyproxy:
    build: ./shinyproxy
    container_name: shinyproxy
    expose:
      - 4000
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "./shinyproxy-logs/server:/log"
      - "./shinyproxy-logs/container:/container-logs"
      - "./shinyproxy:/opt/shinyproxy"

  nginx:
    build: ./nginx
    container_name: nginx
    restart: always
    depends_on:
      - shinyproxy
    ports:
      - "80:80"
      - "443:443"
Building the images worked correctly; however, when I run docker-compose up, nginx spits out the following error: nginx | 2021/06/27 07:16:37 [error] 22#22: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 95.157.1.112
I thought this might happen because nginx has to wait for the shinyproxy app, so I added depends_on, but this did not solve my issue.
Does anyone know where my problem is?
OK, as so often happens, when you ask a question you find the answer five minutes later:
The shinyproxy application has a YAML file where you also have to set the port, which I had to override:
proxy:
  port: 4000
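One note on the depends_on attempt above: depends_on only orders container startup; it does not wait until the app inside is actually accepting connections. A minimal wait-for-port loop can bridge that gap. This is a sketch assuming bash (for the /dev/tcp pseudo-device); the host, port, and retry count are placeholders:

```shell
# Retry a TCP connect until it succeeds or the retry budget is spent.
wait_for() {
  host=$1; port=$2; max=$3; tries=0
  until (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; do
    tries=$((tries + 1))
    [ "$tries" -ge "$max" ] && return 1
    sleep 1
  done
}
# Port 1 is almost never open, so this gives up after 2 tries:
wait_for 127.0.0.1 1 2 || echo "not ready yet"
```

In practice, such a loop would run in the nginx container's entrypoint against the upstream's host and port before starting nginx.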
I have an environment built upon Docker containers (in boot2docker). I have the following docker-compose.yml file to quickly set up nginx and Nexus servers:
version: '3.2'

services:
  nexus:
    image: stefanprodan/nexus
    container_name: nexus
    ports:
      - 8081:8081
      - 5000:5000

  nginx:
    image: nginx:latest
    container_name: nginx
    ports:
      - 5043:443
    volumes:
      - /opt/dm/nginx2/nginx.conf:/etc/nginx/nginx.conf:ro
Nginx has the following configuration (nginx.conf):
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    proxy_send_timeout 120;
    proxy_read_timeout 300;
    proxy_buffering off;
    keepalive_timeout 5 5;
    tcp_nodelay on;

    server {
        listen 80;
        server_name demo.com;
        return 301 https://$server_name$request_uri;
    }

    server {
        listen 443 ssl;
        server_name demo.com;

        # allow large uploads of files - refer to nginx documentation
        client_max_body_size 1024m;

        # optimize downloading files larger than 1G - refer to nginx doc before adjusting
        #proxy_max_temp_file_size 2048m

        #ssl on;
        #ssl_certificate /etc/nginx/ssl.crt;
        #ssl_certificate_key /etc/nginx/ssl.key;

        location / {
            proxy_pass http://nexus:8081/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto "https";
        }
    }
}
Nexus seems to work very well: I can successfully run curl http://localhost:8081 on the Docker host machine, which returns the HTML of the Nexus login page. Now I want to try the nginx server. It is configured to listen on port 443, but SSL is disabled for now (I wanted to test it before diving into SSL configuration). As you can see, my nginx container maps container port 443 to host port 5043. So I try the following curl command: curl -v http://localhost:5043/. I expect my HTTP request to be sent to nginx and proxied via proxy_pass http://nexus:8081/; to Nexus. The nexus hostname is visible within the Docker container network and is accessible from the nginx container. Unfortunately, in response I receive:
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 5043 (#0)
> GET / HTTP/1.1
> Host: localhost:5043
> User-Agent: curl/7.49.1
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact
curl: (52) Empty reply from server
I checked the nginx logs (error and access), but they are empty. Can somebody help me solve this problem? It should be just a simple example of proxying requests, but maybe I misunderstand some concept.
Do you have an upstream directive in your nginx conf (placed within the http directive)?
upstream nexus {
    server <Nexus_IP>:<Nexus_Port>;
}
Only then can nginx resolve it correctly. The docker-compose service name nexus is not injected into the nginx container at runtime.
You can try links in docker-compose:
https://docs.docker.com/compose/compose-file/#links
This gives you an alias for the linked container in your /etc/hosts, but you still need an upstream directive. Update: if the name is resolvable, you can also use it directly in nginx directives such as location.
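A related pitfall: nginx resolves upstream names once, at startup, so it fails to boot if the other container is not up yet. A sketch that defers the lookup to request time via Docker's embedded DNS (127.0.0.11 is the standard resolver address on user-defined Docker networks; the service name and port here are assumptions matching this compose file):

```nginx
# Sketch: resolve the compose service name per-request through Docker's
# embedded DNS instead of once at startup, so nginx starts even if the
# "nexus" container is not running yet.
location / {
    resolver 127.0.0.11 valid=10s;
    set $nexus_upstream http://nexus:8081;
    proxy_pass $nexus_upstream;
}
```

Using a variable in proxy_pass is what forces the runtime lookup; with a literal hostname, the resolver directive is ignored for that upstream.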
https://serverfault.com/questions/577370/how-can-i-use-environment-variables-in-nginx-conf
As per @arnold's answer, you are missing the upstream configuration in your nginx. I see you are using the stefanprodan Nexus image; see his blog for the full configuration. Below you can find mine (remember to open ports 8081 and 5000 of Nexus even though the entry point is 443). You also need to include the certificate, because the Docker client requires working SSL:
worker_processes 2;

events {
    worker_connections 1024;
}

http {
    error_log /var/log/nginx/error.log warn;
    access_log /dev/null;

    proxy_intercept_errors off;
    proxy_send_timeout 120;
    proxy_read_timeout 300;

    upstream nexus {
        server nexus:8081;
    }

    upstream registry {
        server nexus:5000;
    }

    server {
        listen 80;
        listen 443 ssl default_server;
        server_name <yourdomain>;

        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

        ssl_certificate /etc/letsencrypt/live/<yourdomain>/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/<yourdomain>/privkey.pem;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 5m;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_dhparam /etc/ssl/certs/dhparam.pem;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";

        keepalive_timeout 5 5;
        proxy_buffering off;

        # allow large uploads
        client_max_body_size 1G;

        location / {
            # redirect to docker registry
            if ($http_user_agent ~ docker ) {
                proxy_pass http://registry;
            }
            proxy_pass http://nexus;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto "https";
        }
    }
}
The certificates are generated using Let's Encrypt or certbot. The rest of the configuration is there to get an A+ in the SSL Labs analysis, as explained here.
In your docker-compose file, port 5000 of the image was never exposed, so the mapping
ports:
  - 8081:8081
  - 5000:5000
is not effective and you cannot connect to port 5000.
You can do it like this:
Build a new image from a Dockerfile that exposes port 5000 (mine is named feibor/nexus:3.16.2-1):
FROM sonatype/nexus3:3.16.2
EXPOSE 5000
Then use the new image to start the container and publish the port:
version: "3.7"

services:
  nexus:
    image: 'feibor/nexus:3.16.2-1'
    deploy:
      placement:
        constraints:
          - node.hostname == node1
      restart_policy:
        condition: on-failure
    ports:
      - 8081:8081/tcp
      - 5000:5000/tcp
    volumes:
      - /mnt/home/opt/nexus/nexus-data:/nexus-data:z