I am trying to set up an NGINX server and mount a directory into the container. I have a DigitalOcean server running, and want to link my website data into the nginx container.
Part of the docker-compose file is:

webserver:
  depends_on:
    - wordpress
  image: nginx:latest
  container_name: webserver
  restart: unless-stopped
  # Expose port 80 to enable the config options defined in nginx.conf
  ports:
    - "80:80"
    - "443:443"
  # combination of named volumes and bind mounts
  # bind wordpress app code
  # bind nginx config dir on host
  # mount certbot certificates and keys for domain
  volumes:
    - wp_data:/var/www/html
    - ~/custom:/etc/nginx/conf.d/custom
    - ./nginx-conf:/etc/nginx/conf.d
    - certbot-etc:/etc/letsencrypt
  networks:
    - kgNetwork
  command: /bin/bash -c "envsubst < /etc/nginx/conf.d/custom > /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"
nginx.conf:

server {
    # Tells nginx to listen on port 80
    listen 80;
    listen [::]:80;

    root /usr/share/nginx/html;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }
}
The container starts, but I don't see my linked test html file. The log shows: envsubst: error while reading "standard input": Is a directory
I am not quite sure how to interpret this. On my server I created a subfolder "custom" in my home folder, containing an index.html.
My thought process so far was:
Create a custom html in the host folder
Mount the volume via ~/custom:/etc/nginx/conf.d/custom
Run the command /bin/bash -c "envsubst < /etc/nginx/conf.d/custom > /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'" to set up the custom website on nginx.
The nginx container is running, but loading the site does not show anything. I am new to Docker and have been trying to debug this for 2 hours now, but I am clearly missing the point :)
Thanks
Sebastian
You say you created $HOME/custom/index.html. When you launch the container, you do it with a bind mount ~/custom:/etc/nginx/conf.d/custom; that mounts the directory $HOME/custom into the nginx configuration directory. When you then try to run envsubst, its input is the custom directory, which leads to the error you get.
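The error is easy to reproduce outside Docker: redirecting standard input from a directory fails the same way for any command, envsubst included. A minimal sketch using cat, so it works even without gettext installed:

```shell
# Redirecting stdin from a directory fails with "Is a directory",
# which is exactly what envsubst < /etc/nginx/conf.d/custom runs into.
mkdir -p /tmp/custom_demo
if cat < /tmp/custom_demo 2>/dev/null; then
  echo "unexpected success"
else
  echo "reading from a directory failed as expected"
fi
```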
If that directory actually contains the HTML files, you need to mount it at the location you specified in your configuration:
volumes:
  - ~/custom:/usr/share/nginx/html
If you're trying to do environment-variable substitution on the HTML content at deploy time, you also need to change the path in the envsubst command. Consider using a Docker entrypoint wrapper script to do this templating. Also remember that bind mounts are bidirectional: writes into the mounted directory after container startup show up on the host too, so with this setup you might have trouble running multiple containers off the same (shared) host content.
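As a hedged alternative to a hand-rolled command or entrypoint script: recent official nginx images (1.19 and later) run envsubst themselves at startup on *.template files found under /etc/nginx/templates, writing the results into /etc/nginx/conf.d. A sketch (the ./nginx-templates directory and NGINX_HOST variable are illustrative):

```yaml
webserver:
  image: nginx:latest
  volumes:
    # default.conf.template is rendered to /etc/nginx/conf.d/default.conf at startup
    - ./nginx-templates:/etc/nginx/templates
  environment:
    - NGINX_HOST=example.com   # referenced as ${NGINX_HOST} inside the template
```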
Related
I have tried reading through the other StackOverflow questions here, but I am either missing something or none of them work for me.
Context
I have two docker containers set up on a DigitalOcean server running Ubuntu.
root_frontend_1 running on ports 0.0.0.0:3000->3000/tcp
root_nginxcustom_1 running on ports 0.0.0.0:80->80/tcp
If I connect to http://127.0.0.1, I get the default Nginx index.html homepage. If I visit http://127.0.0.1:3000, I get my React app.
What I am trying to accomplish is to get my React app when I visit http://127.0.0.1. Following the documentation and suggestions here on StackOverflow, I have the following:
docker-compose.yml in the root of my DigitalOcean server:
version: "3"
services:
nginxcustom:
image: nginx
hostname: nginxcustom
ports:
- "80:80"
volumes:
- ./nginx.conf:/root/nginxcustom/conf/custom.conf
tty: true
backend:
build: https://github.com/Twitter-Clone/twitter-clone-api.git
ports:
- "8000:8000"
tty: true
frontend:
build: https://github.com/dougmellon/react-api.git
ports:
- "3000:3000"
stdin_open: true
tty: true
nginxcustom/conf/custom.conf:

server {
    listen 80;
    server_name http://127.0.0.1;

    location / {
        proxy_pass http://root_frontend_1:3000; # this one here
        proxy_redirect off;
    }
}
When I run docker-compose up, it builds, but when I visit the IP of my server, it still shows the default nginx html file.
Question
What am I doing wrong here and how can I get it so the main URL points to my react container?
Thank you for your time, and if there is anything I can add for clarity, please don't hesitate to ask.
TL;DR;
The nginx service should proxy_pass to the compose service name (frontend), not the container name (root_frontend_1), and the nginx config should be mounted to the correct location inside the container.
Tip: the container name can be set for a service in the docker-compose.yml with the container_name setting; however, beware that you cannot --scale services with a fixed container_name.
Tip: the container name (root_frontend_1) is generated from the compose project name, which defaults to the current directory name if not set.
Tip: the nginx images are packaged with a default /etc/nginx/nginx.conf that will include the default server config from /etc/nginx/conf.d/default.conf. You can docker cp the default configuration files out of a container if you'd like to inspect them or use them as a base for your own configuration:
docker create --name nginx nginx
docker cp nginx:/etc/nginx/conf.d/default.conf default.conf
docker cp nginx:/etc/nginx/nginx.conf nginx.conf
docker container rm nginx
With nginx proxying connections for the frontend service, we don't need to bind the host's port to the container; the service's ports definition can be replaced with an expose definition to prevent direct connections to http://159.89.135.61:3000 (depending on the backend, you might want to prevent direct connections to it as well):
version: "3"
services:
...
frontend:
build: https://github.com/dougmellon/react-api.git
expose:
- "3000"
stdin_open: true
tty: true
Taking it a step further, we can configure an upstream for the frontend service, then configure the proxy_pass to use the upstream:
upstream frontend {
    server frontend:3000 max_fails=3;
}

server {
    listen 80;
    server_name 159.89.135.61;

    location / {
        proxy_pass http://frontend/;
    }
}
... then bind-mount the custom default.conf on top of the default.conf inside the container:
version: "3"
services:
nginxcustom:
image: nginx
hostname: nginxcustom
ports:
- "80:80"
volumes:
- ./default.conf:/etc/nginx/conf.d/default.conf
tty: true
... and finally --scale our frontend service (bounce the services, removing the containers, to make sure changes to the config take effect):
docker-compose stop nginxcustom \
&& docker-compose rm -f \
&& docker-compose up -d --scale frontend=3
Docker will resolve the service name to the IPs of the 3 frontend containers, and nginx will proxy connections to them in a (by default) round-robin manner.
Tip: you cannot --scale a service that has ports mappings; only a single container can bind to the port.
Tip: if you've updated the config and can connect to your load-balanced service, then you're all set to create a DNS record resolving a hostname to your public IP address, and then update your default.conf's server_name.
Tip: for security I maintain specs for building a nginx docker image with Modsecurity and Modsecurity-nginx pre-baked with the OWASP Core Rule Set.
In Docker, when multiple services need to communicate with each other, you can use the service name (set in the docker-compose.yml) in the URL instead of the IP (which is assigned from the network's available pool). It will automatically be resolved to the right container IP thanks to Docker's network management.
For you it would be http://frontend:3000
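Concretely, the proxied server block could be sketched like this (this assumes the config is mounted where nginx actually reads it, e.g. /etc/nginx/conf.d/default.conf, as the other answer points out):

```nginx
server {
    listen 80;

    location / {
        proxy_pass http://frontend:3000;
        proxy_redirect off;
    }
}
```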
I have developed a small project using Flask/TensorFlow. It runs under an app server, gunicorn.
I also have to include nginx in the project for serving static files. Without Docker the app runs fine; all parts (gunicorn, nginx, flask) cooperate as intended. It's now time to move this project to an online server, and I need to do it via Docker.
Nginx and the gunicorn->flask app communicate via a unix socket. In my localhost environment I used a socket inside the app root folder, myapp/app.sock, and it all worked great.
The problem now is that I can't quite understand how to tell nginx inside Docker to use the same socket file, and how to tell gunicorn to listen to it. I get the following error:
upstream: http:// unix:/var/run/app.sock failed (No such file or directory) while connecting to upstream
I tried using different paths to the socket file, but no luck: same error.
docker-compose file:
version: '3'
services:
  nginx:
    image: nginx:latest
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/remote-app:/etc/nginx/sites-enabled/remote-app
      - /etc/nginx/proxy_params:/etc/nginx/proxy_params
    ports:
      - 8000:8000
    build: .
    command: gunicorn --bind unix:/var/run/app.sock wsgi:app --reload --workers 1 --timeout 60
    environment:
      - FLASK_APP=prediction_service.py
      - FLASK_DEBUG=1
      - PYTHONUNBUFFERED=True
    restart: on-failure
The main Dockerfile (for the main app; it builds fine, all is working):
FROM python:3.8-slim
RUN pip install flask gunicorn flask_wtf boto3 tqdm
RUN pip install numpy==1.18.5
RUN pip install tensorflow==2.2.0 onnxruntime==1.4.0
COPY *.ipynb /temp/
COPY *.hdf5 /temp/
COPY *.onnx /temp/
COPY *.json /temp/
COPY *.py /temp/
WORKDIR /temp
nginx.conf is 99% the same as the default, with only the upload file size limit increased to 8M.
proxy_params is just a preset of configuration parameters for making nginx proxy requests,
and remote-app is a config for my app (a simple one):
server {
    listen 8000;
    server_name localhost;

    location / {
        include proxy_params;
        proxy_pass http://unix:/var/run/app.sock; # tried /temp/app.sock here, same issue
    }
}
So if I open localhost (even without port 8000) I get the nginx answer. If I try to open localhost:8000 I get the socket error pasted above.
I would avoid using sockets for this, as there is IP communication between containers/services anyway, and you really should have a separate service for the app server.
Something like:
version: '3'
services:
  nginx:
    image: nginx:latest
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/remote-app:/etc/nginx/sites-enabled/remote-app
      - /etc/nginx/proxy_params:/etc/nginx/proxy_params
    ports:
      - 80:80
      - 143:143

  app_server:
    build: .
    command: gunicorn --bind '0.0.0.0:5000' wsgi:app --reload --workers 1 --timeout 60
    environment:
      - FLASK_APP=prediction_service.py
      - FLASK_DEBUG=1
      - PYTHONUNBUFFERED=True
    restart: on-failure
Notice instead of binding gunicorn to the socket, it is bound to all IP interfaces of the app_server container on port 5000.
With the separate service app_server alongside your current nginx service, you can simply treat these service names like DNS aliases in each container. So in the nginx config:
proxy_pass http://app_server:5000/
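With that, the remote-app config from the question would become something like this sketch (proxy_params mounted as before):

```nginx
server {
    listen 80;
    server_name localhost;

    location / {
        include proxy_params;
        proxy_pass http://app_server:5000;
    }
}
```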
So if i open localhost(even without port 8000) i can get nginx answer.
That sounds like you mean connecting to localhost on port 80, which could be an nginx server running on the host machine. This is also suggested by this line in your compose file: /etc/nginx/proxy_params:/etc/nginx/proxy_params.
That mount loads the file from a local installation of nginx on the host. You should be aware of this: having that host server running could confuse you when debugging, and launching this compose file somewhere else would require /etc/nginx/proxy_params to be present on that host.
You should probably store this in the project directory, like the other files which are mounted, and mount it like:
- ./nginx/proxy_params:/etc/nginx/proxy_params
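As an aside: if you did want to keep the unix-socket approach instead, the directory holding the socket would have to be shared between the two containers, e.g. via a named volume. A rough, untested sketch (paths are illustrative):

```yaml
services:
  nginx:
    image: nginx:latest
    volumes:
      - sock_dir:/var/run/app        # nginx would proxy_pass to unix:/var/run/app/app.sock
  app_server:
    build: .
    command: gunicorn --bind unix:/var/run/app/app.sock wsgi:app
    volumes:
      - sock_dir:/var/run/app
volumes:
  sock_dir:
```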
I'm trying to follow this guide to set up a reverse proxy for a docker container (serving a static file), using another container with an instance of nginx as the reverse proxy.
I expect to see my page served on /, but I am blocked in the build with the error message:
container_nginx_1 | 2020/05/10 16:54:12 [emerg] 1#1: host not found in upstream "container1:8001" in /etc/nginx/conf.d/sites-enabled/virtual.conf:2
container_nginx_1 | nginx: [emerg] host not found in upstream "container1:8001" in /etc/nginx/conf.d/sites-enabled/virtual.conf:2
nginx_docker_test_container_nginx_1 exited with code 1
I have tried many variations of the following virtual.conf file; this is the current one, based on the example given and various other pages:
upstream cont {
    server container1:8001;
}

server {
    listen 80;

    location / {
        proxy_pass http://cont/;
    }
}
If you are willing to look at a 3rd party site, I've made a minimal repo here, otherwise the most relevant files are below.
My docker-compose file looks like this:
version: '3'
services:
  container1:
    hostname: container1
    restart: always
    image: danjellz/http-server
    ports:
      - "8001:8001"
    volumes:
      - ./proj1:/public
    command: "http-server . -p 8001"
    depends_on:
      - container_nginx
    networks:
      - app-network

  container_nginx:
    build:
      context: .
      dockerfile: docker/Dockerfile_nginx
    ports:
      - 8080:8080
    networks:
      - app-network

networks:
  app-network:
    driver: bridge
and the Dockerfile
# docker/Dockerfile_nginx
FROM nginx:latest
# add nginx config files to sites-available & sites-enabled
RUN mkdir /etc/nginx/conf.d/sites-available
RUN mkdir /etc/nginx/conf.d/sites-enabled
ADD projnginx/conf.d/sites-available/virtual.conf /etc/nginx/conf.d/sites-available/virtual.conf
RUN cp /etc/nginx/conf.d/sites-available/virtual.conf /etc/nginx/conf.d/sites-enabled/virtual.conf
# Replace the standard nginx conf
RUN sed -i 's|include /etc/nginx/conf.d/\*.conf;|include /etc/nginx/conf.d/sites-enabled/*.conf;|' /etc/nginx/nginx.conf
WORKDIR /
I'm running this using docker-compose up.
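The sed rewrite in Dockerfile_nginx can be sanity-checked in isolation; a quick sketch against a copy of the relevant include line from the stock nginx.conf:

```shell
# Reproduce the include line and apply the same substitution as the Dockerfile
printf 'include /etc/nginx/conf.d/*.conf;\n' > /tmp/nginx_snippet.conf
sed -i 's|include /etc/nginx/conf.d/\*.conf;|include /etc/nginx/conf.d/sites-enabled/*.conf;|' /tmp/nginx_snippet.conf
cat /tmp/nginx_snippet.conf
# → include /etc/nginx/conf.d/sites-enabled/*.conf;
```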
Similar: react - docker host not found in upstream
The problem is that if a hostname in an upstream block cannot be resolved, nginx will not start. Here you have defined service container1 to depend on container_nginx, but the nginx container never comes up because the container1 hostname cannot be resolved (container1 has not started yet). Don't you think it should be the reverse? The nginx container should depend on the app container.
Additionally, in your nginx port binding you have mapped 8080:8080, while your nginx config listens on 80.
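A sketch of how the container_nginx fragment might look with the dependency reversed and the port mapping matched to the listen 80 directive (untested against the linked repo; container1's own depends_on would be removed):

```yaml
  container_nginx:
    build:
      context: .
      dockerfile: docker/Dockerfile_nginx
    depends_on:
      - container1   # start the upstream host before nginx resolves it
    ports:
      - 8080:80      # host 8080 -> container 80, where virtual.conf listens
    networks:
      - app-network
```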
I start an nginx reverse proxy in docker-compose.
The first docker-compose file looks like this:
version: "3.5"
services:
rproxy:
build:
context: ./nginx
dockerfile: Dockerfile
ports:
- 80:80
- 443:443
volumes:
- '/etc/letsencrypt:/etc/letsencrypt'
networks:
- main
networks:
main:
name: main_network
The Dockerfile just makes sure the nginx server has the following configuration:
server {
    listen 443 ssl;
    server_name website.dev;

    ssl_certificate /etc/letsencrypt/live/www.website.dev/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.website.dev/privkey.pem;

    location / {
        resolver 127.0.0.11;
        set $frontend http://website;
        proxy_pass $frontend;
    }
}
First I run the docker-compose file above. When I then try to access www.website.dev, I get a 502 error, as expected.
Then I run this other docker-compose file defined below:
version: '3.5'
services:
  website:
    image: registry.website.dev/frontendcontainer:latest
    command: npm run deploy
    networks:
      main:
        aliases:
          - website

networks:
  main:
    external:
      name: main_network
This should start the website container on the same network as the nginx container.
"docker ps" shows that the docker container is running.
Going to website.dev gives a 502 error. This is unexpected: I expect nginx to now be able to connect to the running container.
I reset the nginx server by running the following on the first docker-compose file:
docker-compose up -d
Going to website.dev now displays the contents of the website container.
I make changes to the website container and upload the new image to the private registry.
I use the following commands on the second docker-compose file:
docker-compose down
The old website container is no longer in existence.
docker-compose pull
The new website container is pulled.
docker-compose up
The new website container is now online.
Going to website.dev now displays the contents of the old (confirmed to be non-existent) container instead of the new one. This is unexpected.
Resetting the nginx server causes it to deliver the correct website again.
My question is: how do I configure nginx to just deliver whatever it finds at the configured URL, without having to reset the nginx server?
The Dockerfile, as requested:
FROM nginx:alpine
RUN rm /etc/nginx/conf.d/*
COPY proxy.conf /etc/nginx/conf.d
Now we have all the parameters. You are using angular-cli and compiling the code with ng build, which results in static files; you don't need to serve them through a proxy pass. You only need to point the location at the folder containing index.html, and everything will work on its own, without http-server.
NGinx:
server {
    listen 443 ssl default_server;
    server_name website.dev _ default_server;

    ssl_certificate /etc/letsencrypt/live/www.website.dev/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.website.dev/privkey.pem;

    location / {
        root /path/to/dist/website; # where ng build put index.html with all .js and assets
        index index.htm index.html;
    }
}
docker-compose NGinx:
version: "3.5"
services:
rproxy:
build:
context: ./nginx
dockerfile: Dockerfile
ports:
- 80:80
- 443:443
volumes:
- '/etc/letsencrypt:/etc/letsencrypt'
- '/host/path/shared:/path/to' # <-- Add this line. (host: /host/path/shared)
networks:
- main
networks:
main:
name: main_network
docker-compose website:
version: '3.5'
services:
  website:
    image: registry.website.dev/frontendcontainer:latest
    command: npm run deploy
    volumes:
      - '/host/path/shared:/path/to' # <-- Add this line. (host: /host/path/shared)
    networks:
      main:
        aliases:
          - website

networks:
  main:
    external:
      name: main_network
Now ng build --prod will create index.html and assets in /host/path/shared/dist/website (inside the containers: /path/to/dist/website). nginx will then have access to those files internally at /path/to/dist/website, without using http-server. Angular is a frontend client; it doesn't need to be started in production mode.
I'm trying to make a reverse proxy and dockerize it for my Flask application, with nginx, gunicorn, Docker and docker-compose. Before, the nginx part was in the same container as the web app; I'm trying to separate it.
My docker-compose yaml file is:
version: '3.6'
services:
  nginx:
    restart: always
    build: ./nginx/
    ports:
      - 8008:8008
    networks:
      - web_net

  flask_min:
    build: .
    image: flask_min
    container_name: flask_min
    expose:
      - "8008"
    networks:
      - web_net
    depends_on:
      - nginx

networks:
  web_net:
    driver: bridge
My dockerfile is :
FROM python:3.6
MAINTAINER aurelien beliard (email#domain.com)
RUN apt update
COPY . /usr/flask_min
WORKDIR /usr/flask_min
RUN useradd -r -u 20979 -ms /bin/bash aurelien.beliard
RUN pip3 install -r requirements.txt
CMD gunicorn -w 3 -b :8008 app:app
My nginx Dockerfile is:
FROM nginx
COPY ./flask_min /etc/nginx/sites-available/
RUN mkdir /etc/nginx/sites-enabled
RUN ln -s /etc/nginx/sites-available/flask_min /etc/nginx/sites-enabled/flask_min
My nginx config file, in /etc/nginx/sites-available and sites-enabled, is named flask_min:
server {
    listen 8008;
    server_name http://192.168.16.241/;
    charset utf-8;

    location / {
        proxy_pass http://flask_min:8008;
    }
}
The requirements.txt file is:
Flask==0.12.2
grequests==0.3.0
gunicorn==19.7.1
Jinja2==2.10
The two containers are created fine and gunicorn starts well, but I can't access the application, and there is nothing in the nginx access and error logs.
If you have any idea, it would be very appreciated.
PS: Sorry for the mistakes, English is not my native language.
As mentioned in Maxm's answer, flask_min depends on nginx starting up first. One way to fix it is to reverse the dependency order, but I think there's a more clever solution that doesn't require the dependency.
Nginx tries to optimize by caching the DNS results of proxy_pass at startup, but you can make it more flexible by putting the target in a variable. This allows you to freely restart flask_min without having to also restart nginx.
Here's an example:
resolver 127.0.0.11 ipv6=off;
set $upstream http://flask_min:8008;
proxy_pass $upstream;
server_name should just be the host. try localhost or just _.
you can also do multiple hosts: server_name 192.168.16.241 localhost;
The depends_on should be on nginx, not flask_min. Remove it from flask_min and add:
depends_on:
  - flask_min
to nginx.
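Putting both changes together, the nginx service from the question's compose file would look roughly like this sketch (with depends_on removed from flask_min):

```yaml
  nginx:
    restart: always
    build: ./nginx/
    ports:
      - 8008:8008
    networks:
      - web_net
    depends_on:
      - flask_min   # nginx now starts after the app it proxies to
```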
See if that works, let me know if you run into any more snags.