I've been struggling to make changes to my docker app. After a lot of trial and error, it looks like what I thought was my nginx conf file might not actually be my nginx conf file.
I have determined this because I tried removing it entirely and my app runs the same with docker.
It looks like changes I make to my nginx service via the app.conf file have been having no impact on the rest of my app.
I am trying to understand if my volume mapping is correct. Here's my docker compose:
version: "3.5"
services:
  collabora:
    image: collabora/code
    container_name: collabora
    restart: always
    cap_add:
      - MKNOD
    environment:
      - "extra_params=--o:ssl.enable=false --o:ssl.termination=true"
      - domain=mydomain\.com
      - dictionaries=en_US
    ports:
      - "9980:9980"
    volumes:
      - ./appdata/collabora:/config
  nginx:
    image: nginx:1.15-alpine
    restart: unless-stopped
    depends_on:
      - certbot
      - collabora
    volumes:
      # - ./data/nginx:/etc/nginx/conf.d
      - ./data/nginx:/etc/nginx
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    ports:
      - "80:80"
      - "443:443"
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
My project's directory is:
/my_project
  docker-compose.yaml
  data/
    nginx/
      app.conf
And then my app.conf has various nginx settings
server {
    listen 80;
    server_name example.com www.example.com;
    server_tokens off;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}
... more settings below
Assuming I'm correct that my app.conf file is not being used, how can I correctly map my local app.conf to the right place in the nginx container?
The main nginx config file is
/etc/nginx/nginx.conf
That file includes all of the content from
/etc/nginx/conf.d/*.conf
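For reference, the stock nginx.conf in the official image ends its http block with exactly that include; a trimmed sketch (details vary by version):
http {
    # ... logging, mime types, gzip, etc. ...
    include /etc/nginx/conf.d/*.conf;
}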
You can verify which nginx.conf is in use by executing ps -ef | grep nginx in your container.
Also verify your default.conf. In any case, in your compose you must have:
volumes:
  - ./data/nginx:/etc/nginx/conf.d
If it still isn't picked up, try an absolute host path.
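Another quick check is to dump the configuration nginx actually loaded (a sketch; adjust the service name to your compose file). nginx -T tests the config and prints every file it reads, so app.conf's contents should appear in the output if the file is really in use:
$ docker-compose exec nginx nginx -T | grep -A 2 server_name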
Regards
Related
I'm trying to serve an application with this docker-compose file, taken from
https://github.com/solidnerd/docker-bookstack/blob/master/docker-compose.yml
version: '2'
services:
  mysql:
    image: mysql:5.7.33
    environment:
      - MYSQL_ROOT_PASSWORD=secret
      - MYSQL_DATABASE=bookstack
      - MYSQL_USER=bookstack
      - MYSQL_PASSWORD=secret
    volumes:
      - mysql-data:/var/lib/mysql
  bookstack:
    image: solidnerd/bookstack
    depends_on:
      - mysql
    environment:
      - DB_HOST=mysql:3306
      - DB_DATABASE=bookstack
      - DB_USERNAME=bookstack
      - DB_PASSWORD=secret
    volumes:
      - uploads:/var/www/bookstack/public/uploads
      - storage-uploads:/var/www/bookstack/storage/uploads
    ports:
      - "8080:8080"
#  nginx:
#    build: ./nginx
#    depends_on:
#      - bookstack
#    ports:
#      - "80:80"
volumes:
  mysql-data:
  uploads:
  storage-uploads:
On Windows, if I go to localhost:8080, it works.
On Debian 10, if I try one of these commands:
curl "localhost:8080"
curl "172.17.0.1:8080"
It hangs, then returns a 502 Proxy Error after a while.
Here is the result of docker ps -a
45afeda13eab solidnerd/bookstack "/bin/docker-entrypo…" 27 seconds ago Up 12 seconds 80/tcp, 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp app_bookstack_1
953a91c86e7d mysql "docker-entrypoint.s…" 31 seconds ago Up 9 seconds 3306/tcp app_db_1
If I uncomment my nginx configuration and use these 2 files:
events {
}

http {
    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_pass http://bookstack:8080;
        }
    }
}
FROM nginx:1.20.1
COPY nginx.conf /etc/nginx/nginx.conf
Then it works. Why?
I would like to use the app without nginx. I thought it was a problem with that application, but I had the same problem with another application, so I guess the problem comes from my Debian box.
Could you advise me on how to reach localhost:8080 without nginx?
I would like to pass environment variables to my nginx app.conf via an app.conf.template file, using docker and docker-compose.
When I use an app.conf.template file with no command entry in docker-compose.yaml, my variables are substituted successfully and my redirects via nginx work as expected. But when I add a command in docker-compose, the substitution and my redirects fail.
My setup is per the instructions in the documentation, under the section 'Using environment variables in nginx configuration (new in 1.19)':
Out-of-the-box, nginx doesn't support environment variables inside
most configuration blocks. But this image has a function, which will
extract environment variables before nginx starts.
Here is an example using docker-compose.yml:
web:
  image: nginx
  volumes:
    - ./templates:/etc/nginx/templates
  ports:
    - "8080:80"
  environment:
    - NGINX_HOST=foobar.com
    - NGINX_PORT=80
By default, this function reads template files in
/etc/nginx/templates/*.template and outputs the result of executing
envsubst to /etc/nginx/conf.d
... more ...
My docker-compose.yaml works when it looks like this:
version: "3.5"

networks:
  collabora:

services:
  nginx:
    image: nginx
    depends_on:
      - certbot
      - collabora
    volumes:
      - ./data/nginx/templates:/etc/nginx/templates
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    ports:
      - "80:80"
      - "443:443"
    env_file: .env
    networks:
      - collabora
On the host I have a template at ./data/nginx/templates/app.conf.template, which contains my nginx config with environment variables throughout in the form ${variable_name}.
With this setup I'm able to run the container and my redirects work as expected. When I exec into the container I can cat /etc/nginx/conf.d/app.conf and see the file with the correct variables swapped in from the .env file.
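To illustrate, here is a stripped-down version of the template and the rendered result (the domain and variable name are made up; the image's envsubst step only substitutes variables actually defined in the environment, so nginx's own $request_uri is left alone):
# ./data/nginx/templates/app.conf.template
server {
    listen 80;
    server_name ${server_domain};
    return 301 https://${server_domain}$request_uri;
}

# rendered to /etc/nginx/conf.d/app.conf inside the container
server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}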
But I need to add a command to my docker-compose.yaml:
command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
When I add that command the setup fails and the variables are no longer substituted into the app.conf file within the container.
On another forum it was suggested I move the command into its own file in the container. I gave this a try and created a shell script test.sh:
#!/bin/sh
while :; do
  sleep 6h & wait $!   # plain $! here; the $$ escaping is only needed inside docker-compose.yaml
  nginx -s reload
done
My new docker-compose:
version: "3.5"

networks:
  collabora:

services:
  nginx:
    image: nginx
    depends_on:
      - certbot
      - collabora
    volumes:
      - ./data/nginx/templates:/etc/nginx/templates
      - ./test.sh:/docker-entrypoint.d/test.sh # new - added test.sh into the container here
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    ports:
      - "80:80"
      - "443:443"
    env_file: .env
    networks:
      - collabora
This fails. When I exec into the container and cat /etc/nginx/conf.d/app.conf I DO see the correct config, but the redirects do not work, even though they do work when I leave test.sh out of /docker-entrypoint.d/.
I asked nearly the same question yesterday and was given a working solution. However, it 'feels more correct' to add a shell script to the container at /docker-entrypoint.d/ and go that route instead, like I've attempted in this post.
For what you're trying to do, I think the best solution is to create a sidecar container (as far as I can tell, the image runs the /docker-entrypoint.d scripts sequentially before starting nginx, so a script that loops forever in there blocks nginx from ever starting), like this:
version: "3.5"

networks:
  collabora:

volumes:
  shared_run:

services:
  nginx:
    image: nginx:1.19
    volumes:
      - "shared_run:/run"
      - ./data/nginx/templates:/etc/nginx/templates
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    ports:
      - "80:80"
      - "443:443"
    env_file: .env
    networks:
      - collabora

  nginx_reloader:
    image: nginx:1.19
    pid: service:nginx
    volumes:
      - "shared_run:/run"
    entrypoint:
      - /bin/bash
      - -c
    command:
      - |
        while :; do
          sleep 60
          echo reloading
          nginx -s reload
        done
This lets you use the upstream nginx image without needing to muck about with its mechanics. The key here is that (a) we run the nginx_reloader container in the same PID namespace as the nginx container itself, and (b) we arrange for the two containers to share a /run directory so that the nginx -s reload command can find the pid of the nginx process in the expected location.
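You can confirm both halves are working once the stack is up (a sketch; service names as in the compose file above; kill -0 merely checks that the pid is visible from the reloader's namespace):
$ docker-compose exec nginx_reloader cat /run/nginx.pid
$ docker-compose exec nginx_reloader /bin/bash -c 'kill -0 "$(cat /run/nginx.pid)" && echo nginx is visible'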
I have this dockerfile:
FROM nginx
COPY .docker/certificates/fullchain.pem /etc/letsencrypt/live/mydomain.com/fullchain.pem
COPY .docker/certificates/privkey.pem /etc/letsencrypt/live/mydomain.com/privkey.pem
COPY .docker/config/options-ssl-nginx.conf /etc/nginx/options-ssl-nginx.conf
COPY .docker/config/ssl-dhparams.pem /etc/nginx/ssl-dhparams.pem
COPY .docker/config/nginx.conf /etc/nginx/conf.d/default.conf
RUN chmod +r /etc/letsencrypt/live/mydomain.com/fullchain.pem
I have this in my nginx configuration:
server {
    listen 443 ssl default_server;
    server_name _;

    # Why can't this file be found?
    ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
    # ssl_certificate /etc/nginx/fullchain.pem;
    # ssl_certificate_key /etc/nginx/privkey.pem;

    include /etc/nginx/options-ssl-nginx.conf;
    ssl_dhparam /etc/nginx/ssl-dhparams.pem;
    ...
}
Nginx crashes with:
[emerg] 7#7: cannot load certificate "/etc/letsencrypt/live/mydomain.com/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/letsencrypt/live/mydomain.com/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
However, if I change the location of fullchain.pem and privkey.pem to, for example, /etc/nginx/fullchain.pem and /etc/nginx/privkey.pem and update the nginx configuration, it does find the files and works as expected.
Here's the service definition in docker-compose.yml:
nginx-server:
  container_name: "nginx-server"
  build:
    context: ../../
    dockerfile: .docker/dockerfiles/NginxDockerfile
  restart: on-failure
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - static-content:/home/docker/code/static
    - letsencrypt-data:/etc/letsencrypt
    - certbot-data:/var/www/certbot
  depends_on:
    - api
  command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  networks:
    - api-network
    - main

# Commented out to verify that the files aren't being deleted by certbot
# certbot:
#   image: certbot/certbot
#   container_name: "certbot"
#   depends_on:
#     - nginx-server
#   restart: unless-stopped
#   volumes:
#     - letsencrypt-data:/etc/letsencrypt
#     - certbot-data:/var/www/certbot
#   entrypoint: "/bin/sh -c 'sleep 30s && trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
The intention is to use fullchain.pem as an initial certificate until one can be requested from Let's Encrypt. Note that, at this point, there is no certbot service, and the /etc/letsencrypt/live/mydomain.com directory is not referenced anywhere else at all (only in NginxDockerfile and nginx.conf), so it shouldn't be an issue of another service deleting the files. Rebuilding with --no-cache does not help.
Why can't nginx find the files in this specific location, but can find them if copied to a different location?
EDIT: As suggested, I ended up using a host volume instead. This didn't work when the host volume was located inside the repository (root_of_context/path/to/gitignored/directory/letsencrypt:/etc/letsencrypt), but did work with /etc/letsencrypt:/etc/letsencrypt, which I personally find ugly, but oh well.
Volumes are mounted at run time, after your image is built, so the mount hides whatever the image put at that path.
Since you mounted letsencrypt-data on /etc/letsencrypt, nginx is going to look for your files in the letsencrypt-data volume, not in the files you COPYed into the image.
I don't know the purpose of this mount, but I guess your container would succeed in running if you removed - letsencrypt-data:/etc/letsencrypt from volumes.
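One wrinkle worth knowing: a named volume is seeded from the image's content only the first time it is created, while it is still empty. After that its contents persist across rebuilds and keep hiding whatever the new image has at that path, which would also explain why --no-cache rebuilds don't help. Removing the volume forces it to be re-seeded from the rebuilt image (a sketch; the volume name is prefixed with your compose project name):
$ docker-compose down
$ docker volume rm <project>_letsencrypt-data
$ docker-compose up -d --build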
I am trying to find a way to publish nginx, express, and letsencrypt's SSL all together using docker-compose. There are many documents about this, so I referenced them and tried to make my own configuration. I succeeded in configuring nginx + SSL from this: https://medium.com/@pentacent/nginx-and-lets-encrypt-with-docker-in-less-than-5-minutes-b4b8a60d3a71
So now I want to put a sample nodejs express app into the nginx + SSL docker-compose. But I don't know why I get a 502 Bad Gateway from nginx rather than express's initial page.
I am testing this app with a spare domain of mine, on an AWS EC2 Ubuntu 16 instance. I don't think there is any problem with the domain's DNS or the security rules: ports 80, 443, and 3000 are all open already, and when I tested without the express app, the nginx default page showed fine.
nginx conf in /etc/nginx/conf.d
server {
    listen 80;
    server_name example.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name example.com;
    server_tokens off;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    ssl_certificate /etc/letsencrypt/live/sendpi.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/sendpi.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
docker-compose.yml
version: '3'
services:
  app:
    container_name: express
    restart: always
    build: .
    ports:
      - '3000:3000'
  nginx:
    container_name: nginx
    image: nginx:1.15-alpine
    restart: unless-stopped
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    ports:
      - "80:80"
      - "443:443"
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
Dockerfile of express
FROM node:12.2-slim
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
I think SSL works fine, but there are some problems between express app and nginx. How can I fix this?
proxy_pass http://localhost:3000
is proxying the request to port 3000 on the container that is running nginx. What you instead want is to connect to port 3000 of the container running express. For that, we need to do two things.
First, we make the express container visible to the nginx container at a predefined hostname. We can use links in docker-compose.
nginx:
  links:
    - "app:expressapp"
Alternatively, since links are now considered a legacy feature, a better way is to use a user-defined network. Define a network of your own with
docker network create my-network
and then connect your containers to that network in the compose file by adding the following at the top level:
networks:
  default:
    external:
      name: my-network
All the services connected to a user-defined network can access each other via name without explicitly setting up links.
Then in the nginx.conf, we proxy to the express container using that hostname:
location / {
    proxy_pass http://app:3000;
}
Warning: The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link.
Define networks in your docker-compose.yml and configure your services with the appropriate network:
version: '3'
services:
  app:
    restart: always
    build: .
    networks:
      - backend
    expose:
      - "3000"
  nginx:
    image: nginx:1.15-alpine
    restart: unless-stopped
    depends_on:
      - app
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    networks:
      - frontend
      - backend
    ports:
      - "80:80"
      - "443:443"
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
networks:
  frontend:
  backend:
Note: the app service no longer publishes its ports to the host; it only exposes port 3000 (ref. exposing and publishing ports), so it is only reachable by services connected to the backend network. The nginx service has a foot in both the backend and frontend networks, to accept incoming traffic on the frontend and proxy the connections to the app on the backend (ref. multi-host networking).
With user-defined networks you can resolve the service name:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    upstream app {
        server app:3000 max_fails=3;
    }

    server {
        listen 80;
        server_name example.com;
        return 301 https://$server_name$request_uri;
    }

    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        server_name example.com;
        server_tokens off;

        location / {
            proxy_pass http://app;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }

        ssl_certificate /etc/letsencrypt/live/sendpi.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/sendpi.com/privkey.pem;
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    }
}
Removing container_name from your services makes it possible to scale them: docker-compose up -d --scale nginx=1 --scale app=3. nginx will then load balance the traffic in round-robin to the 3 app containers.
I think maybe a source of confusion here is the way the "localhost" designation behaves among services running under docker-compose. Each container understands itself to be "localhost", so "localhost" does not refer to the host machine, and a service published on a host port is not reachable from inside a container via localhost. To demonstrate:
services:
  app:
    container_name: express
    restart: always
    build: .
    ports:
      - '2999:3000' # expose app's port on host's 2999
Rebuild
docker-compose build
docker-compose up
Tell the container running the express app to curl against its own running service on port 3000:
$ docker-compose exec app /bin/bash -c "curl http://localhost:3000"
<!DOCTYPE html>
<html>
<head>
<title>Express</title>
<link rel='stylesheet' href='/stylesheets/style.css' />
</head>
<body>
<h1>Express</h1>
<p>Welcome to Express</p>
</body>
</html>
Tell app to try that same service, which we exposed on port 2999 on the host machine:
$ docker-compose exec app /bin/bash -c "curl http://localhost:2999"
curl: (7) Failed to connect to localhost port 2999: Connection refused
We will of course see this same behavior between running containers as well, so in your setup nginx was trying to proxy to its own service running on localhost:3000 (but there wasn't one, as you know).
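The flip side is that service names do resolve between containers on the same compose network, which is what the proxy_pass http://app:3000 fix relies on. A quick check (a sketch; the alpine nginx image ships wget rather than curl):
$ docker-compose exec nginx /bin/sh -c "wget -qO- http://app:3000"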
Tasks
build a NodeJS app
add SSL functionality out of the box (so it can work automatically)
Solution
https://github.com/evertramos/docker-compose-letsencrypt-nginx-proxy-companion
/{path_to_the_project}/docker-compose.yml
version: '3.7'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy:alpine
    restart: always
    container_name: nginx-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs:ro
      - ./vhost.d:/etc/nginx/vhost.d
      - ./html:/usr/share/nginx/html
      - ./conf.d:/etc/nginx/conf.d
    ports:
      - "443:443"
      - "80:80"
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: letsencrypt
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./certs:/etc/nginx/certs:rw
      - ./vhost.d:/etc/nginx/vhost.d:rw
      - ./html:/usr/share/nginx/html:rw
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
  api:
    container_name: ${APP_NAME}
    build:
      context: .
      dockerfile: Dockerfile
    command: npm start --port ${APP_PORT}
    expose:
      - ${APP_PORT}
    # ports:
    #   - ${APP_PORT}:${APP_PORT}
    restart: always
    environment:
      VIRTUAL_PORT: ${APP_PORT}
      VIRTUAL_HOST: ${DOMAIN}
      LETSENCRYPT_HOST: ${DOMAIN}
      LETSENCRYPT_EMAIL: ${LETSENCRYPT_EMAIL}
      NODE_ENV: production
      PORT: ${APP_PORT}
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs:ro
/{path_to_the_project}/.env
APP_NAME=best_api
APP_PORT=3000
DOMAIN=api.site.com
LETSENCRYPT_EMAIL=myemail@gmail.com
Do not forget to point DOMAIN at your server before you run the container there.
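You can sanity-check the DNS record before bringing the stack up (a sketch; dig comes from the dnsutils package, and the answer should be your server's public IP):
$ dig +short api.site.com
203.0.113.10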
How does it work?
Just run docker-compose up --build -d
I configured my django-uwsgi-nginx using docker compose with the following files.
From browser "http://127.0.0.1:8000/" works fine and gives me the django default page
From browser "http://127.0.0.1:80" throws a 502 Bad Gateway
dravoka-docker.conf
upstream web {
    server 0.0.0.0:8000;
}

server {
    listen 80;
    server_name web;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        alias "/dravoka-static/";
    }

    location / {
        include uwsgi_params;
        proxy_pass http://web;
    }
}
nginx/Dockerfile
FROM nginx:latest
RUN echo "---------------------- I AM NGINX --------------------------"
RUN rm /etc/nginx/conf.d/default.conf
ADD sites-enabled/ /etc/nginx/conf.d
RUN nginx -t
web is just from "django-admin startproject web"
docker-compose.yaml
version: '3'
services:
  nginx:
    restart: always
    build: ./nginx/
    depends_on:
      - web
    ports:
      - "80:80"
  web:
    build: .
    image: dravoka-image
    ports:
      - "8000:8000"
    volumes:
      - .:/dravoka
    command: uwsgi /dravoka/web/dravoka.ini
Dockerfile
# Ubuntu base image
FROM ubuntu:latest
# Some installs........
EXPOSE 80
When you say "from the docker instance", are you running curl from within the container, or are you running the curl command from your local machine?
If you are running it from your local machine, update your docker-compose's web service to the following
...
web:
  build: .
  image: dravoka-image
  expose:
    - "8000"   # expose takes just a port; host:container mappings belong under ports
  volumes:
    - .:/dravoka
  command: uwsgi /dravoka/web/dravoka.ini
and try again.
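Separately, note that upstream web { server 0.0.0.0:8000; } points nginx at its own container rather than at the Django one. On the compose network the web service is reachable by its service name, so the upstream would look like this (a sketch, using the service name from the docker-compose.yaml above):
upstream web {
    server web:8000;   # docker-compose service name, not 0.0.0.0
}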