I have a Docker project using both nginx and Let's Encrypt with Certbot.
I'm quite new to Docker, and for my nginx image I'm using the feature that lets me use environment variables in the nginx configuration.
A well-known tutorial uses this command to start nginx and reload it automatically:
command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
The problem is that when I use this command instead of the basic one, my template is no longer converted to a working nginx conf.
I want to be able to reload nginx, but I also want nginx to turn my template into a valid nginx configuration.
How can I achieve that, please?
Here is my docker-compose.yml:
version: '3'
services:
  db:
    image: mysql
    environment:
      - MYSQL_DATABASE=${DATABASE_NAME}
      - MYSQL_USER=${DATABASE_USER}
      - MYSQL_PASSWORD=${DATABASE_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${DATABASE_ROOT_PASSWORD}
    ports:
      - "3306:3306"
    volumes:
      - ./db/data:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    environment:
      - ADMINER_DESIGN=lucas-sandery
    ports:
      - 8080:8080
  php-fpm:
    build:
      context: ./php-fpm
    depends_on:
      - db
    environment:
      - APP_ENV=${APP_ENV}
      - APP_SECRET=${APP_SECRET}
      - DATABASE_URL=mysql://${DATABASE_USER}:${DATABASE_PASSWORD}@db:3306/${DATABASE_NAME}?serverVersion=5.7
    volumes:
      - ../app:/var/www
  node:
    image: node:alpine
    volumes:
      - ../app:/var/www
    working_dir: /var/www
    command: "/bin/sh -c 'yarn install ; yarn run watch'"
  nginx:
    image: nginx:alpine
    volumes:
      - ../app:/var/www
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/templates:/etc/nginx/templates
      - ./logs:/var/log
      - ./certbot/conf:/etc/letsencrypt
      - ./certbot/www:/var/www/certbot
    depends_on:
      - php-fpm
    ports:
      - "80:80"
      - "443:443"
    environment:
      - APP_DOMAIN=${APP_DOMAIN}
  certbot:
    image: certbot/certbot
    volumes:
      - ./certbot/conf:/etc/letsencrypt
      - ./certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
And my site.template.conf file
upstream php-upstream {
    server php-fpm:9000;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    server_name ${APP_DOMAIN};

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server ipv6only=on;

    ssl_certificate /etc/letsencrypt/live/${APP_DOMAIN}/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/${APP_DOMAIN}/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    server_name ${APP_DOMAIN};
    root /var/www/public;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_pass php-upstream;
        fastcgi_index index.php;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_read_timeout 600;
        include fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }
}
Thanks
The problem is in that line: the image's docker-entrypoint.sh only runs its initialization scripts (including the template substitution) when the command's first argument is nginx or nginx-debug, and yours is /bin/sh.
To run the entrypoint logic as well, change your command to:
"/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & /docker-entrypoint.sh nginx -g \"daemon off;\"'"
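As an aside, the sleep 6h & wait $${!} construction (rather than a plain sleep 6h) backgrounds the sleep so the shell stays responsive to signals while waiting; the doubled $$ is just compose-file escaping for $!. Here is a scaled-down model of that loop, with the six-hour sleep shrunk and nginx -s reload replaced by a counter (both substitutions are mine, purely for illustration):

```shell
# Scaled-down model of the reload loop: "sleep 6h" becomes "sleep 0" and
# "nginx -s reload" becomes a counter, so the control flow is observable.
reloads=0
iterations=0
while [ "$iterations" -lt 3 ]; do
  sleep 0 &                  # backgrounded, like "sleep 6h &" in the real command
  wait $!                    # wait on the sleep's PID; interruptible by signals
  reloads=$((reloads + 1))   # stands in for: nginx -s reload
  iterations=$((iterations + 1))
done
echo "reloaded $reloads times"
```

In the real command the whole loop is itself backgrounded with & while nginx -g "daemon off;" runs in the foreground as the container's main process.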
You can add restart: always to the nginx section of docker-compose.yml so the service is restarted if it exits on an error. If you want to trigger a restart manually, run docker-compose restart nginx from the folder containing the yml file.
I don't know if you intentionally left sensitive information like ${APP_DOMAIN} out of your question, but if you want to inject such values you can use Docker secrets. You can read more about them here: https://docs.docker.com/engine/swarm/secrets/
(Hint: you don't need Swarm to use secrets; at the end of that page there is a docker-compose example. As long as you start the services with compose, you don't need Swarm.)
EDIT
So the issue is that you can't substitute environment variables in an nginx conf file directly. Have you seen this? https://github.com/docker-library/docs/tree/master/nginx#using-environment-variables-in-nginx-configuration-new-in-119
I found this information here: https://serverfault.com/a/1022249/80400
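For reference, that template feature works roughly like this: at startup the image's entrypoint renders each template under /etc/nginx/templates into /etc/nginx/conf.d. Below is a minimal stand-in sketch; it is my own simplification that substitutes only ${APP_DOMAIN} with sed, whereas the real script (/docker-entrypoint.d/20-envsubst-on-templates.sh) uses envsubst:

```shell
# Simplified model of the nginx image's template step: render every
# *.template file in template_dir into output_dir, substituting
# ${APP_DOMAIN} while leaving nginx runtime variables like $host alone.
render_templates() {
  template_dir="$1"   # e.g. /etc/nginx/templates
  output_dir="$2"     # e.g. /etc/nginx/conf.d
  for t in "$template_dir"/*.template; do
    [ -e "$t" ] || continue
    out="$output_dir/$(basename "$t" .template)"
    sed 's|${APP_DOMAIN}|'"${APP_DOMAIN}"'|g' "$t" > "$out"
  done
}
```

As far as I know, the entrypoint only picks up files ending in .template by default, so a template named site.conf.template becomes conf.d/site.conf, while a file named site.template.conf would be skipped.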
If you want to reload nginx every 6 hours, why not use a crontab? You can keep everything as it is and add to the crontab:
0 */6 * * * cd /home/to/docker-compose.yml/ && docker-compose exec nginx nginx -s reload > /var/log/reloading_nginx.log
(Note the repeated nginx: the first is the service name, the second the binary inside the container. I think docker-compose exec is the right command here rather than docker-compose run.)
Related
I've been struggling to make changes to my Docker app. After a lot of trial and error, it looks like what I thought was my nginx conf file might not actually be the conf file in use.
I determined this because I tried removing it entirely and my app runs the same under Docker.
Changes I make to my nginx service via the app.conf file have had no impact on the rest of my app.
I am trying to understand whether my volume mapping is correct. Here's my docker compose:
version: "3.5"
services:
  collabora:
    image: collabora/code
    container_name: collabora
    restart: always
    cap_add:
      - MKNOD
    environment:
      - "extra_params=--o:ssl.enable=false --o:ssl.termination=true"
      - domain=mydomain\.com
      - dictionaries=en_US
    ports:
      - "9980:9980"
    volumes:
      - ./appdata/collabora:/config
  nginx:
    image: nginx:1.15-alpine
    restart: unless-stopped
    depends_on:
      - certbot
      - collabora
    volumes:
      # - ./data/nginx:/etc/nginx/conf.d
      - ./data/nginx:/etc/nginx
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    ports:
      - "80:80"
      - "443:443"
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
My project's directory is:
/my_project
  docker-compose.yaml
  data/
    nginx/
      app.conf
And then my app.conf has various nginx settings:
server {
    listen 80;
    server_name example.com www.example.com;
    server_tokens off;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
} ... more settings below
Assuming I'm correct that my app.conf file is not being used, how can I correctly map app.conf on local to the correct place in the nginx container?
The main nginx config file is /etc/nginx/nginx.conf. That file includes all the content from /etc/nginx/conf.d/*.
You can check which nginx.conf is in use by executing ps -ef | grep nginx in your container.
Verify your default.conf. In any case, your compose file should map the conf.d directory:
volumes:
  - ./data/nginx:/etc/nginx/conf.d
Try with an absolute path.
Regards
I have this Dockerfile:
FROM nginx
COPY .docker/certificates/fullchain.pem /etc/letsencrypt/live/mydomain.com/fullchain.pem
COPY .docker/certificates/privkey.pem /etc/letsencrypt/live/mydomain.com/privkey.pem
COPY .docker/config/options-ssl-nginx.conf /etc/nginx/options-ssl-nginx.conf
COPY .docker/config/ssl-dhparams.pem /etc/nginx/ssl-dhparams.pem
COPY .docker/config/nginx.conf /etc/nginx/conf.d/default.conf
RUN chmod +r /etc/letsencrypt/live/mydomain.com/fullchain.pem
I have this in my nginx configuration:
server {
    listen 443 ssl default_server;
    server_name _;

    # Why can't this file be found?
    ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
    # ssl_certificate /etc/nginx/fullchain.pem;
    # ssl_certificate_key /etc/nginx/privkey.pem;

    include /etc/nginx/options-ssl-nginx.conf;
    ssl_dhparam /etc/nginx/ssl-dhparams.pem;
    ...
}
Nginx crashes with:
[emerg] 7#7: cannot load certificate "/etc/letsencrypt/live/mydomain.com/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/letsencrypt/live/mydomain.com/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
However, if I change the location of fullchain.pem and privkey.pem to, for example, /etc/nginx/fullchain.pem and /etc/nginx/privkey.pem and update the nginx configuration accordingly, nginx does find the files and works as expected.
Here's the service definition in docker-compose.yml:
nginx-server:
  container_name: "nginx-server"
  build:
    context: ../../
    dockerfile: .docker/dockerfiles/NginxDockerfile
  restart: on-failure
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - static-content:/home/docker/code/static
    - letsencrypt-data:/etc/letsencrypt
    - certbot-data:/var/www/certbot
  depends_on:
    - api
  command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  networks:
    - api-network
    - main

# Commented out to verify that the files aren't being deleted by certbot
# certbot:
#   image: certbot/certbot
#   container_name: "certbot"
#   depends_on:
#     - nginx-server
#   restart: unless-stopped
#   volumes:
#     - letsencrypt-data:/etc/letsencrypt
#     - certbot-data:/var/www/certbot
#   entrypoint: "/bin/sh -c 'sleep 30s && trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
The intention is to use fullchain.pem as an initial certificate until one can be requested from Let's Encrypt. Note that at this point there is no certbot service, and the /etc/letsencrypt/live/mydomain.com directory is not referenced anywhere else at all (only in NginxDockerfile and nginx.conf), so it shouldn't be a case of another service deleting the files. Rebuilding with --no-cache does not help.
Why can't nginx find the files in this specific location, but can find them if copied to a different location?
EDIT: As suggested, I ended up using a host volume instead. This didn't work when the host volume was located inside the repository (root_of_context/path/to/gitignored/directory/letsencrypt:/etc/letsencrypt), but did work with /etc/letsencrypt:/etc/letsencrypt, which I personally find ugly, but oh well.
Volumes are mounted when the container is run, i.e. after your image is built.
Since you mounted letsencrypt-data on /etc/letsencrypt, nginx looks for your files in the letsencrypt-data volume, not in the image layer your COPY instructions wrote to.
I don't know the purpose of this mount, but I guess your container would run successfully if you removed - letsencrypt-data:/etc/letsencrypt from volumes.
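One nuance worth adding: Docker seeds a named volume from the image's content only the first time the volume is created; on every later run the volume's existing (possibly stale) contents simply shadow whatever the image now has at that path, which is also why rebuilding with --no-cache changes nothing. A rough plain-shell model of that rule (seed_volume is a made-up helper for illustration, not a Docker command):

```shell
# Model of Docker's named-volume rule: copy the image's files into the
# volume only when the volume is empty (first creation); otherwise the
# volume's existing contents win and the image layer stays shadowed.
seed_volume() {
  image_dir="$1"    # stands in for the image layer (what COPY produced)
  volume_dir="$2"   # stands in for the named volume
  if [ -z "$(ls -A "$volume_dir" 2>/dev/null)" ]; then
    cp -R "$image_dir"/. "$volume_dir"/
  fi
}
```

So if letsencrypt-data was first created from an image that didn't yet contain the certificates, the COPY'd files never become visible at /etc/letsencrypt; removing the volume (docker-compose down -v) or dropping the mount fixes it.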
I am trying to deploy nginx, Express, and Let's Encrypt SSL all together using docker-compose. There are many documents about this, so I referenced them and tried to make my own configuration. I succeeded in configuring nginx + SSL following https://medium.com/@pentacent/nginx-and-lets-encrypt-with-docker-in-less-than-5-minutes-b4b8a60d3a71
Now I want to add a sample Node.js Express app to the nginx + SSL docker-compose setup. But I don't know why I get 502 Bad Gateway from nginx rather than Express's initial page.
I am testing this app with a spare domain of mine, on an AWS EC2 Ubuntu 16 instance. I think there is no problem with the domain's DNS or the security-rule settings: ports 80, 443, and 3000 are all open, and when I tested without the Express app, the nginx default page showed fine.
nginx conf in /etc/nginx/conf.d
server {
    listen 80;
    server_name example.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name example.com;
    server_tokens off;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    ssl_certificate /etc/letsencrypt/live/sendpi.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/sendpi.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
docker-compose.yml
version: '3'
services:
  app:
    container_name: express
    restart: always
    build: .
    ports:
      - '3000:3000'
  nginx:
    container_name: nginx
    image: nginx:1.15-alpine
    restart: unless-stopped
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    ports:
      - "80:80"
      - "443:443"
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
Dockerfile of express
FROM node:12.2-slim
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
I think SSL works fine, but there seems to be a problem between the Express app and nginx. How can I fix this?
proxy_pass http://localhost:3000;
proxies the request to port 3000 on the container that is running nginx. What you want instead is to connect to port 3000 of the container running Express. For that, we need to do two things.
First, we make the express container visible to the nginx container at a predefined hostname. We can use links in docker-compose:
nginx:
  links:
    - "app:expressapp"
Alternatively, since links are now considered a legacy feature, a better way is to use a user-defined network. Define a network of your own with
docker network create my-network
and then connect your containers to that network by adding the following at the top level of the compose file:
networks:
  default:
    external:
      name: my-network
All services connected to a user-defined network can reach each other by name without explicitly setting up links.
Then, in the nginx conf, we proxy to the express container using that hostname:
location / {
    proxy_pass http://app:3000;
}
Warning: The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link.
Define networks in your docker-compose.yml and configure your services with the appropriate network:
version: '3'
services:
  app:
    restart: always
    build: .
    networks:
      - backend
    expose:
      - "3000"
  nginx:
    image: nginx:1.15-alpine
    restart: unless-stopped
    depends_on:
      - app
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    networks:
      - frontend
      - backend
    ports:
      - "80:80"
      - "443:443"
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
networks:
  frontend:
  backend:
Note: the app service no longer publishes its ports to the host; it only exposes port 3000 (ref. exposing and publishing ports), so it is reachable only by services connected to the backend network. The nginx service has a foot in both the frontend and backend networks, accepting incoming traffic on the frontend and proxying connections to the app on the backend (ref. multi-host networking).
With user-defined networks you can resolve the service name:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    upstream app {
        server app:3000 max_fails=3;
    }

    server {
        listen 80;
        server_name example.com;
        return 301 https://$server_name$request_uri;
    }

    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        server_name example.com;
        server_tokens off;

        location / {
            proxy_pass http://app;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }

        ssl_certificate /etc/letsencrypt/live/sendpi.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/sendpi.com/privkey.pem;
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    }
}
Removing container_name from your services makes it possible to scale them: docker-compose up -d --scale nginx=1 --scale app=3 will have nginx load-balance the traffic round-robin across the 3 app containers.
I think a source of confusion here may be how the "localhost" designation behaves among services run by docker-compose. The way docker-compose orchestrates your containers, each container understands itself to be "localhost", so "localhost" does not refer to the host machine (and, if I'm not mistaken, there is no straightforward way for a container to reach a service exposed on a host port). To demonstrate:
services:
  app:
    container_name: express
    restart: always
    build: .
    ports:
      - '2999:3000' # expose app's port on host's 2999
Rebuild
docker-compose build
docker-compose up
Tell the container running the express app to curl its own running service on port 3000:
$ docker-compose exec app /bin/bash -c "curl http://localhost:3000"
<!DOCTYPE html>
<html>
  <head>
    <title>Express</title>
    <link rel='stylesheet' href='/stylesheets/style.css' />
  </head>
  <body>
    <h1>Express</h1>
    <p>Welcome to Express</p>
  </body>
</html>
Tell app to try that same service, which we exposed on port 2999 of the host machine:
$ docker-compose exec app /bin/bash -c "curl http://localhost:2999"
curl: (7) Failed to connect to localhost port 2999: Connection refused
We would of course see this same behavior between running containers as well, so in your setup nginx was trying to proxy to its own service running on localhost:3000 (but there wasn't one, as you know).
Tasks
build a NodeJS app
add SSL functionality out of the box (that can work automatically)
Solution
https://github.com/evertramos/docker-compose-letsencrypt-nginx-proxy-companion
/{path_to_the_project}/docker-compose.yml
version: '3.7'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy:alpine
    restart: always
    container_name: nginx-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs:ro
      - ./vhost.d:/etc/nginx/vhost.d
      - ./html:/usr/share/nginx/html
      - ./conf.d:/etc/nginx/conf.d
    ports:
      - "443:443"
      - "80:80"
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: letsencrypt
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./certs:/etc/nginx/certs:rw
      - ./vhost.d:/etc/nginx/vhost.d:rw
      - ./html:/usr/share/nginx/html:rw
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
  api:
    container_name: ${APP_NAME}
    build:
      context: .
      dockerfile: Dockerfile
    command: npm start --port ${APP_PORT}
    expose:
      - ${APP_PORT}
    # ports:
    #   - ${APP_PORT}:${APP_PORT}
    restart: always
    environment:
      VIRTUAL_PORT: ${APP_PORT}
      VIRTUAL_HOST: ${DOMAIN}
      LETSENCRYPT_HOST: ${DOMAIN}
      LETSENCRYPT_EMAIL: ${LETSENCRYPT_EMAIL}
      NODE_ENV: production
      PORT: ${APP_PORT}
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs:ro
/{path_to_the_project}/.env
APP_NAME=best_api
APP_PORT=3000
DOMAIN=api.site.com
LETSENCRYPT_EMAIL=myemail@gmail.com
Do not forget to point DOMAIN at your server before you run the container there.
How does it work?
Just run docker-compose up --build -d
I have a Docker setup with 2 containers serving individual sites, and jwilder/nginx-proxy in front of them.
However I'm only getting the default nginx page, so the vhosts aren't routing correctly.
I've used https://www.rent-a-hero.de/wp/2017/06/09/use-j-wilders-nginx-proxy-for-multiple-docker-compose-projects/ as a guide to get myself started on this, because I wanted to set up a proxy network.
Each of the serving nginx containers has a vhost file and uses a fourth, shared php container (the code is the same in both nginx containers).
$ docker run -d \
--name nginx-proxy \
-p 80:80 \
--network=nginx-proxy \
-v /var/run/docker.sock:/tmp/docker.sock:ro \
jwilder/nginx-proxy
/docker-compose.yml
version: '2'
services:
  app-admin:
    image: nginx:latest
    container_name: app-admin
    environment:
      VIRTUAL_HOST: admin.app.local
      VIRTUAL_PORT: 80
    volumes:
      - .:/var/www/html
      - admin.app.conf:/etc/nginx/vhost.d/admin.app.local:ro
    links:
      - php
  php:
    image: php:7-fpm
    container_name: php
    volumes:
      - .:/var/www/html
networks:
  default:
    external:
      name: nginx-proxy
/admin.app.conf:
server {
    server_name admin.app.local;
    listen 80;
    listen [::]:80;

    root /var/www/html;
    index index.html index.php;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        try_files $uri /index.php$is_args$args;
    }

    location ~ /\.ht {
        deny all;
    }
}
app.conf is essentially the same at this testing stage, with the same result.
(The original files have been edited to simplify the question: the removed files just set up the required nginx-proxy container and network, and I've stripped out the example second nginx container.)
I want to create a LEMP stack with docker-compose for local web development, but I've run into some trouble.
The configuration files I wrote:
docker-compose.yml
nginx:
  image: nginx:latest
  ports:
    - "8080:80"
  volumes:
    - ./web:/web
    - ./mysite.template:/etc/nginx/conf.d/mysite.template
  links:
    - php
  command: /bin/bash -c "envsubst < /etc/nginx/conf.d/mysite.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
php:
  image: php:7-fpm
  volumes:
    - ./web:/web
mysite.template
server {
    listen 80;
    root /web;
    index index.php index.html;
    server_name default_server;

    location ~ \.php$ {
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /web$fastcgi_script_name;
    }
}
The problem is that when I try to open http://localhost:8080, the result is Access Denied.
How can I fix it?
EDIT:
This is the nginx container error log:
nginx_1 | 2017/11/01 13:21:25 [error] 8#8: *1 FastCGI sent in stderr: "Access to the script '/web' has been denied (see security.limit_extensions)" while reading response header from upstream
I was able to get your example working by changing your docker-compose.yml to mount the template directly as default.conf, skipping the envsubst step. (envsubst with no variable list substitutes every $variable it sees, including nginx's own $fastcgi_script_name; that likely left SCRIPT_FILENAME as just /web, which matches the "Access to the script '/web' has been denied" error.) The changed compose file:
nginx:
  image: nginx:latest
  ports:
    - "8080:80"
  volumes:
    - ./web:/web
    - ./mysite.template:/etc/nginx/conf.d/default.conf
  links:
    - php
php:
  image: php:7-fpm
  volumes:
    - ./web:/web