I set up a test environment that places Docker and nginx in front of a gRPC server. Below are my configurations.
docker-compose
version: '3.8'

services:
  web:
    build: .
    command: gunicorn --timeout 100 --workers 2 --threads 4 django_root.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/public/django_root/static
    expose:
      - 8000
    env_file:
      - ./.env.dev
  grpc:
    build: .
    command: python manage.py grpcrunserver 0.0.0.0:50051
    env_file:
      - ./.env.dev
  nginx:
    build:
      context: ./nginx
      dockerfile: Dockerfile-secure
    volumes:
      - static_volume:/public/django_root/static
    ports:
      - 1337:80
      - 443:50052
    depends_on:
      - web
      - grpc

volumes:
  static_volume:
Dockerfile-secure
FROM nginx:1.19.0-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx-secure.conf /etc/nginx/conf.d
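(Note: nginx-secure.conf below references /etc/nginx/server.crt and /etc/nginx/server.key, so those files also have to end up in the image or a volume somehow; for example, something like this, assuming the cert and key sit next to Dockerfile-secure:)

COPY server.crt /etc/nginx/server.crt
COPY server.key /etc/nginx/server.key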
nginx-secure.conf
upstream django_root {
    server web:8000;
}

server {
    listen 80;

    location / {
        proxy_pass http://django_root;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/ {
        alias /public/django_root/static/;
    }
}

log_format compression '$remote_addr - $remote_user [$time_local] '
                       '"$request" $status $body_bytes_sent '
                       '"$http_referer" "$http_user_agent" "$gzip_ratio"';

server {
    listen 50052 ssl http2;

    ssl_certificate /etc/nginx/server.crt;
    ssl_certificate_key /etc/nginx/server.key;

    access_log /var/log/nginx/a.log;
    error_log /var/log/nginx/e.log;

    location / {
        grpc_pass grpc://grpc:50051;
    }
}
The problem I hit is that port 443 does not work as set up in the docker-compose file above, but if I replace it with 8443, my client can talk to the gRPC server. The error I see from my client in the port 443 case is below:
E0211 15:08:05.178000000 22572 src/core/tsi/ssl_transport_security.cc:1439] Handshake failed with fatal error SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED.
I use a self-signed certificate for this test environment on localhost; could this be the problem?
I do not see port 443 being disallowed for this use case on either the nginx site or the Docker site. I need help with this, and if 443 is not allowed here, please point me to the relevant documentation.
It turns out it was the certificate itself. Replacing the self-signed certificate with a Let's Encrypt one and deploying to AWS made port 443 work.
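For anyone who wants to keep testing locally with a self-signed certificate: the CERTIFICATE_VERIFY_FAILED handshake error generally means the gRPC client does not trust the certificate (or the hostname does not match it). A minimal client-side sketch, assuming Python grpcio, the same server.crt used in the nginx config, and a hypothetical generated stub, is to hand the self-signed certificate to the client as its root CA:

import grpc

# Trust the self-signed certificate explicitly (local testing only).
with open("server.crt", "rb") as f:
    creds = grpc.ssl_channel_credentials(root_certificates=f.read())

channel = grpc.secure_channel(
    "localhost:443",
    creds,
    # Only needed if the certificate's CN/SAN does not match "localhost".
    options=[("grpc.ssl_target_name_override", "localhost")],
)
# stub = my_service_pb2_grpc.MyServiceStub(channel)  # hypothetical generated stub

If the handshake still fails with such a setup, it is often because the certificate lacks a subjectAltName entry for the hostname being dialed.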
Related
I created a GitHub repo a few weeks ago with Docker Compose, Odoo, PostgreSQL, Certbot, Nginx as a proxy server, and a little bit of PHP stuff (Symfony) -> https://github.com/Inushin/dockerOdooSymfonySSL While testing the config I found that NGINX works as it is supposed to and you get the correct HTTP -> HTTPS redirect, BUT if you add port 8069, the browser goes to plain HTTP. One of the solutions would be to configure another VPC, but I was thinking about using this repo for other "minimal VPS services" and not needing another VPC, so... how could I solve this? Maybe from the Odoo config? Is something missing in the NGINX conf?
NGINX
#FOR THE ODOO DOMAIN
server {
    listen 80;
    server_name DOMAIN_ODOO;
    server_tokens off;

    location / {
        return 301 https://$server_name$request_uri;
    }

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}

server {
    listen 443 ssl;
    server_name DOMAIN_ODOO;
    server_tokens off;

    location / {
        proxy_pass http://web:8069;
        proxy_set_header Host DOMAIN_ODOO;
        proxy_set_header X-Forwarded-For $remote_addr;
    }

    ssl_certificate /etc/letsencrypt/live/DOMAIN_ODOO/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/DOMAIN_ODOO/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
docker-compose.yml
nginx:
  image: nginx:1.15-alpine
  expose:
    - "80"
    - "443"
  ports:
    - "80:80"
    - "443:443"
  networks:
    - default
  volumes:
    - ./data/nginx:/etc/nginx/conf.d/:rw
    - ./data/certbot/conf:/etc/letsencrypt/:rw
    - ./data/certbotSymfony/conf:/etc/letsencrypt/symfony/:rw
    - ./data/certbotSymfony/www:/var/www/certbot/:rw
  command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"

web:
  image: odoo:13.0
  depends_on:
    - db
  ports:
    - "8069:8069/tcp"
  volumes:
    - web-data:/var/lib/odoo
    - ./data/odoo/config:/etc/odoo
    - ./data/odoo/addons:/mnt/extra-addons
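One thing I am considering (just a sketch, not tested): since ports: "8069:8069" publishes Odoo directly on the host, the browser can always reach it over plain HTTP by adding the port. Keeping the service reachable only inside the Compose network would force all traffic through the nginx proxy:

web:
  image: odoo:13.0
  depends_on:
    - db
  expose:
    - "8069"    # still reachable by nginx via http://web:8069, but not published on the host
  volumes:
    - web-data:/var/lib/odoo
    - ./data/odoo/config:/etc/odoo
    - ./data/odoo/addons:/mnt/extra-addons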
I have a working redirect from HTTP traffic to HTTPS traffic. The environment consists of a flask application in a docker container that is being routed through an NGINX docker container. Below is the nginx.conf file. After running docker-compose up I am able to get the containers active. After running curl localhost, I am getting a 301 Moved Permanently. However, when running curl https://localhost, I am getting a curl: (7) Failed to connect to localhost port 443: Connection refused. I checked my local computer network settings on macOS Big Sur, and the firewall is turned off (any traffic should be allowed in). I'm not sure what else I need to do to get this to work. I have also exposed the port 443 in the docker-compose file for the nginx container. Any advice would be helpful.
NGINX.conf
http {
    upstream flask {
        server app:8000;
    }

    server {
        listen 80;
        server_name localhost;
        return 301 https://$server_name$request_uri;
    }

    server {
        listen 443 ssl;
        server_name localhost;

        ssl_certificate /etc/nginx/nginx/files/localhost.crt;
        ssl_certificate_key /etc/nginx/nginx/files/localhost.key;

        location / {
            proxy_pass http://flask;
            proxy_set_header Host "localhost";
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
Docker Compose
version: "3"
services:
app:
container_name: app
command: gunicorn --bind 0.0.0.0:8000 --workers 2 "app.server:app"
volumes:
- ./:/var/www
build: app
ports:
- 8000:8000
networks:
MyNetwork:
aliases:
- flask
nginx:
container_name: nginx
build: nginx
volumes:
- ./:/var/www
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
ports:
- 80:80
expose:
- "443"
networks:
- MyNetwork
networks:
MyNetwork:
First, you should publish port 443 in your docker-compose file like this:
ports:
  - 80:80
  - 443:443
Then you should mount /etc/nginx/nginx/files as a volume so the certificate files are available inside the container:
volumes:
  - ./:/var/www
  - ./nginx/nginx.conf:/etc/nginx/nginx.conf
  - ./nginx/nginx/files:/etc/nginx/nginx/files
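With both changes in place you can verify from the host (the -k flag because the certificate from the question is self-signed):

curl -vk https://localhost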
I am trying to deploy a simple Django Rest Framework app to a production server using Docker. My aim is to install Nginx as a proxy and Certbot for a regular Let's Encrypt SSL certificate at the same time. I manage my dependencies in Dockerfiles and docker-compose.
So the folder structure looks like this:
app/
    DockerFile
nginx/
    DockerFile
    init-letsencrypt.sh
    nginx.conf
docker-compose.yml
My idea is to keep all the configuration in app/docker-compose.yml and start many different instances from the same source. I do not have any nginx or certbot configuration in app/DockerFile - that is only for Django Rest Framework, and it works well. In docker-compose.yml I have the following code:
version: '3'
services:
  app:
    container_name: djangoserver
    command: gunicorn prototyp.wsgi:application --env DJANGO_SETTINGS_MODULE=prototyp.prod_settings --bind 0.0.0.0:8000 --workers=2 --threads=4 --worker-class=gthread
    build:
      context: ./api
      dockerfile: Dockerfile
    restart: always
    ports:
      - "8000:8000"
    depends_on:
      - otherserver
  otherserver:
    container_name: otherserver
    build:
      context: ./otherserver
      dockerfile: Dockerfile
    restart: always
  nginx:
    build: ./nginx
    ports:
      - 80:80
    depends_on:
      - app
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  certbot:
    image: certbot/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
This lets me build "app", "otherserver", "nginx", and "certbot".
The most important parts are in the "nginx" folder.
I followed this guide and copied the "init-letsencrypt.sh" file from the source exactly as described. Then I tried to run it with bash:
nginx/DockerFile:
FROM nginx:1.19.0-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
RUN mkdir -p /usr/src/app
COPY init-letsencrypt.sh /usr/src/app
WORKDIR /usr/src/app
RUN chmod +x init-letsencrypt.sh
ENTRYPOINT ["/usr/src/app/init-letsencrypt.sh"]
In nginx/nginx.conf I have the following code:
upstream django {
    server app:8000;
}

server {
    listen 80;
    server_name app.com www.app.com;

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name app.com www.app.com;

    access_log /var/log/nginx-access.log;
    error_log /var/log/nginx-error.log;

    ssl_certificate /etc/letsencrypt/live/app.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location ~ ^/static/rest_framework/((img/|css/|js/|fonts).*)$ {
        autoindex on;
        access_log off;
        alias /usr/src/app/static/rest_framework/$1;
    }

    location / {
        proxy_pass http://django;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_body_buffer_size 256k;
        proxy_connect_timeout 120;
        proxy_send_timeout 120;
        proxy_read_timeout 120;
        proxy_buffer_size 64k;
        proxy_buffers 4 64k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        client_max_body_size 100M;
    }
}
So, with this configuration, "docker-compose build" works without any errors and everything is built successfully. But as soon as I run "docker-compose up", certbot and nginx do not connect, and the app works only when I use http://app.com:8000 instead of https://app.com.
I do not see any errors in the console.
What am I doing wrong? What have I missed? Any help will be appreciated.
I see that in your setup you try to run Let's Encrypt from within the nginx container, but I believe there are two better ways, which I describe in detail here and here.
The idea behind the first method is to have a docker-compose file to initiate the letsencrypt certificate, and another docker-compose file to run the system and renew the certificate.
So without further ado, here is the file structure and content that is working really well for me (you still need to adapt the files to suit your needs)
./setup.sh
./docker-compose-initiate.yaml
./docker-compose.yaml
./etc/nginx/templates/default.conf.template
./etc/nginx/templates-initiate/default.conf.template
The setup is in two phases:
In the first phase, "the initiation phase", we run an nginx container and a certbot container just to obtain the SSL certificate for the first time and store it in the host's ./etc/letsencrypt folder.
In the second phase, "the operation phase", we run all the necessary services for the app, including nginx, which this time uses the letsencrypt folder to serve HTTPS on port 443. A certbot container also runs (on demand) to renew the certificate; we can add a cron job for that (a sketch of the crontab is shown after the script below). The setup.sh script is a simple convenience script that runs the commands one after another:
#!/bin/bash
# the script expects two arguments:
# - the domain name for which we are obtaining the ssl certificate
# - the email address associated with the ssl certificate
echo DOMAIN=$1 >> .env
echo EMAIL=$2 >> .env

# Phase 1 "Initiation"
docker-compose -f ./docker-compose-initiate.yaml up -d nginx
docker-compose -f ./docker-compose-initiate.yaml up certbot
docker-compose -f ./docker-compose-initiate.yaml down

# Phase 2 "Operation"
crontab ./etc/crontab
docker-compose -f ./docker-compose.yaml up -d
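The ./etc/crontab file is not shown here; as a sketch (my own assumptions: renew twice a day, reload nginx afterwards, and the project living in /opt/app), it could be as simple as:

# ./etc/crontab (sketch): renew the certificate and reload nginx
0 3,15 * * * cd /opt/app && docker-compose run --rm certbot renew && docker-compose exec -T nginx nginx -s reload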
Phase 1: The ssl certificate initiation phase:
./docker-compose-initiate.yaml
version: "3"
services:
nginx:
container_name: nginx
image: nginx:latest
environment:
- DOMAIN
ports:
- 80:80
volumes:
- ./etc/nginx/templates-initiate:/etc/nginx/templates:ro
- ./etc/letsencrypt:/etc/letsencrypt:ro
- ./certbot/data:/var/www/certbot
certbot:
container_name: certbot
image: certbot/certbot:latest
depends_on:
- nginx
command: >-
certonly --reinstall --webroot --webroot-path=/var/www/certbot
--email ${EMAIL} --agree-tos --no-eff-email
-d ${DOMAIN}
volumes:
- ./etc/letsencrypt:/etc/letsencrypt
- ./certbot/data:/var/www/certbot
./etc/nginx/templates-initiate/default.conf.template
server {
    listen [::]:80;
    listen 80;

    server_name $DOMAIN;

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/certbot;
    }
}
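As background (this is how the official nginx image from 1.19 onwards behaves, not something configured explicitly above): every *.template file mounted under /etc/nginx/templates is rendered into /etc/nginx/conf.d/ when the container starts, with environment variables substituted, roughly equivalent to:

envsubst '$DOMAIN' < /etc/nginx/templates/default.conf.template > /etc/nginx/conf.d/default.conf

That is why $DOMAIN can be used inside the template while only DOMAIN is passed through the environment section of the compose file.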
Phase 2: The operation phase
./docker-compose.yaml
services:
  app:
    {{your_configurations_here}}

  {{other_services...}}:
    {{other_services_configurations}}

  nginx:
    container_name: nginx
    image: nginx:latest
    restart: always
    environment:
      - DOMAIN
    depends_on:
      - app
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./etc/nginx/templates:/etc/nginx/templates:ro
      - ./etc/letsencrypt:/etc/letsencrypt
      - ./certbot/data:/var/www/certbot
      - /var/log/nginx:/var/log/nginx
  certbot:
    container_name: certbot
    image: certbot/certbot:latest
    depends_on:
      - nginx
    command: >-
      certonly --reinstall --webroot --webroot-path=/var/www/certbot
      --email ${EMAIL} --agree-tos --no-eff-email
      -d ${DOMAIN}
    volumes:
      - ./etc/letsencrypt:/etc/letsencrypt
      - ./certbot/data:/var/www/certbot
./etc/nginx/templates/default.conf.template
server {
    listen [::]:80;
    listen 80;

    server_name $DOMAIN;

    return 301 https://$host$request_uri;
}

server {
    listen [::]:443 ssl http2;
    listen 443 ssl http2;

    server_name $DOMAIN;

    ssl_certificate /etc/letsencrypt/live/$DOMAIN/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/$DOMAIN/privkey.pem;

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://app:80;
    }
}
The second method uses two Docker images, http-proxy and http-proxy-acme-companion, which were developed specifically for this purpose. I suggest looking at the blog post for further details.
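These appear to be the images commonly published as nginx-proxy and acme-companion (my interpretation of the names above, not stated explicitly). A rough compose sketch of how they are usually wired together; the image names, volumes, and the app service here are assumptions, so check the projects' documentation before relying on it:

services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro   # the proxy watches the Docker socket

  acme-companion:
    image: nginxproxy/acme-companion
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy          # points at the proxy container above
    volumes:
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - acme:/etc/acme.sh
      - /var/run/docker.sock:/var/run/docker.sock:ro

  app:
    build: ./api
    environment:
      - VIRTUAL_HOST=app.com                       # tells the proxy which hostname to route
      - LETSENCRYPT_HOST=app.com                   # tells the companion to obtain a certificate
      - LETSENCRYPT_EMAIL=admin@app.com

volumes:
  certs:
  vhost:
  html:
  acme:

The appeal of this approach is that the proxied services only declare environment variables; the proxy and the companion handle virtual hosts and certificate renewal automatically.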
As far as I can see, you have not published port 443 for the nginx container:
nginx:
  build: ./nginx
  ports:
    - 80:80
    - 443:443
  depends_on:
Add the 443 port mapping as shown above.
This is my first time using docker-compose. I am attempting to set up an Nginx container as a web server and a container that holds my .NET Core app. The intention is for nginx to pass the call on to Kestrel. Both images build and run, but I get an error when accessing "http://localhost:8080":
proxy_1 | 2019/05/12 16:39:45 [error] 6#6: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: , request: "GET / HTTP/1.1", upstream: "http://172.19.0.2:4000/", host: "localhost:8080"
The project structure is as follows:
Dockersingleproject/
    Dockersingleproject/ (dotnetcore app)
        *app files*
        DockerFile
    Nginx/
        nginx.conf
        DockerFile
    docker-compose.yml
I am under the impression that the connection between the web server container and the app container is being refused, but I cannot figure out why. Below is the app Dockerfile:
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2-stretch-slim AS base
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
WORKDIR /Dockersingleproject
COPY bin/Debug/netcoreapp2.2/publish .
ENV ASPNETCORE_URLS http://+:4000
EXPOSE 4000
ENTRYPOINT ["dotnet", "Dockersingleproject.dll"]
The app Dockerfile exposes port 4000. The nginx.conf:
worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;

    upstream docker-nginx {
        server app:4000;
    }

    server {
        listen 8080;

        location / {
            proxy_pass http://docker-nginx;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection keep-alive;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_cache_bypass $http_upgrade;
            proxy_buffers 8 16k;      # Buffer pool = 8 buffers of 16k
            proxy_buffer_size 16k;    # 16k of buffers from pool used for headers
        }
    }
}
The server listens on port 8080 and proxies requests to port 4000 on the "app" server, which is defined in the docker-compose file:
version: '2'
services:
  app:
    build:
      context: ./Dockersingleproject
      dockerfile: Dockerfile
    ports:
      - "4000:4000"
  proxy:
    build:
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    links:
      - app
The app service maps host port 4000 to container port 4000, and in my head this should be working.
The IP of the nginx container is: 172.19.0.3
The IP of the app container is: 172.19.0.2
Please let me know where my confusion lies. I am on the point of accusing my PC of being the issue. Any information is appreciated.
Getting "Connection refused" when accessing the site, resulting in an nginx 502 Bad Gateway.
This uses the Microsoft-provided ASP.NET Core sample:
docker-compose.yaml:
version: "3"
services:
app:
image: mcr.microsoft.com/dotnet/core/samples:aspnetapp
expose:
- "80"
proxy:
image: nginx
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
ports:
- "8080:80"
NB
The ASP.NET Core sample runs on :80 and is expose'd
The Nginx container also runs on :80 and is exposed on the host on :8080
nginx.conf:
events {}

http {
    server {
        listen 80;

        location / {
            proxy_pass http://app:80;
        }
    }
}
NB
Nginx listens on :80 because its container requires it
The proxy configuration references the service name (app) on :80
And:
curl \
--silent \
--write-out "%{http_code}" \
--output /dev/null \
http://localhost:8080
200
I have created a docker-compose file to spin up both an nginx and a Tomcat image. I use volume-mounted files such as /etc/nginx/nginx.conf and /etc/nginx/conf.d/app.conf.
The same goes for Tomcat, but with XML config files and webapps.
Both spin up and run fine... on their own. I can browse to Nginx and get the welcome page, and the same for Tomcat, on their respective ports 81/8080.
However, I cannot proxy requests to the backend Tomcat. I'll admit I'm an Apache person and have been for years, but I need to experiment.
My nginx.conf hasn't changed; it's still the default. I have an app.conf for the Tomcat application (below). I do try to "CMD mv" the default.conf in the Dockerfile, but it still remains alongside my app.conf, so that may be causing the issue?
My app.conf config is here (apologies, I couldn't get the code to format properly):
"server {
listen *:81;
set $allowOriginSite *;
proxy_pass_request_headers on;
proxy_pass_header Set-Cookie;
access_log /var/log/nginx/access.log combined;
error_log /var/log/nginx/error.log error;
# Upload size unlimited
client_max_body_size 0;
location /evf {
proxy_pass http://tomcat:8080;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
proxy_redirect off;
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass_header Set-Cookie;
}
}
tomcat:8080 being the name of the service in my docker-compose file.
Any help would be appreciated!
Thank you,
Craig
docker-compose.yml for reference:
version: '3'
services:
  nginx:
    build: ./nginx
    image: nginx:evf
    command: nginx -g "daemon off;"
    networks:
      - evf
    container_name: evf-nginx
    volumes:
      - ./volumes/config/nginx-evf.conf:/etc/nginx/conf.d/nginx-evf.conf
      - ./volumes/config/default.conf.disabled:/etc/nginx/conf.d/default.conf.disabled
    ports:
      - "81:80"
  tomcat:
    image: tomcat
    working_dir: /usr/local/tomcat
    volumes:
      - ./volumes/config/tomcat-users.xml:/usr/local/tomcat/conf/tomcat-users.xml
      - ./volumes/webapps/EVF.war:/usr/local/tomcat/webapps/EVF.war
    networks:
      - evf
    container_name: evf-tomcat
    ports:
      - "8080:8080"    # expose 8080 externally to test connectivity
networks:
  evf:
Thanks,
In your nginx conf you have listen *:81, but you are publishing container port 80 with "81:80".
So either publish port 81 with "81:81" or change your nginx config to listen *:80.
If the second option does not work, try replacing the original nginx config by changing the volume mapping in your docker-compose.yml:
volumes:
  - ./nginx/nginx-evf.conf:/etc/nginx/conf.d/default.conf
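Another option (a sketch on my part, not part of the answer above): remove the stock default.conf at image build time in the nginx Dockerfile instead of trying to "CMD mv" it, since the command: line in docker-compose overrides CMD anyway:

FROM nginx
# Drop the stock default server at build time so only the mounted config is loaded
RUN rm /etc/nginx/conf.d/default.conf

After rebuilding, something like curl -I http://localhost:81/evf/ should reach Tomcat through nginx, assuming the EVF webapp deploys under /evf.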