nginx serves files only on port 80 - docker

I have a dockerized Django app with gunicorn and nginx. The app itself works at http://127.0.0.1:8000 but without static/media files; the error is:
172.24.0.1 - - [08/May/2019:13:25:50 +0000] "GET /static/js/master.js HTTP/1.1" 404 77 "http://127.0.0.1:8000/"
If I try to access files on port 80, they are served just fine.
Dockerfile:
FROM python:3.6-alpine
RUN apk --update add \
    build-base \
    postgresql \
    postgresql-dev \
    libpq \
    # pillow dependencies
    jpeg-dev \
    zlib-dev
RUN mkdir /www
WORKDIR /www
COPY requirements.txt /www/
RUN pip install -r requirements.txt
ENV PYTHONUNBUFFERED 1
COPY . /www/
docker-compose.yml
version: "3"
services:
web:
build: .
restart: on-failure
volumes:
- .:/www
env_file:
- ./.env
command: >
sh -c "python manage.py collectstatic --noinput &&
gunicorn --bind 0.0.0.0:8000 portfolio.wsgi:application --access-logfile '-'"
expose:
- "8000"
ports:
- "8000:8000"
nginx:
image: "nginx"
restart: always
volumes:
- ./nginx/conf.d:/etc/nginx/conf.d
- ./static:/var/www/portfolio/static
- ./media:/var/www/portfolio/media
links:
- web
ports:
- "80:80"
nginx.conf
server {
    listen 80;
    server_name 127.0.0.1;

    # serve static files
    location /static/ {
        root /var/www/portfolio;
    }

    # serve media files
    location /media/ {
        root /var/www/portfolio;
    }

    # pass requests for dynamic content to gunicorn
    location / {
        proxy_pass http://web:8000;
        proxy_set_header Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
What I want is for static and media files to load along with my web app on 127.0.0.1. It seems to me that there might be a problem with proxy_pass, but I can't figure it out.
Any ideas?

This seems to be the culprit: proxy_pass http://127.0.0.1:8000;
This line makes Nginx look for a service on port 8000 inside the Nginx container: localhost / 127.0.0.1 inside a container always means "the container itself", never the Docker host.
You are running both services in the same Docker network, so this should work for you:
proxy_pass http://web:8000;
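To confirm the name actually resolves before changing anything else, you can check from inside the nginx container (a quick sanity check; assumes the Compose service names from the files above, and that wget is available in the image):

docker-compose exec nginx getent hosts web
# should print the web container's IP; then fetch a page straight from gunicorn:
docker-compose exec nginx wget -qO- http://web:8000/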

I see you are running two containers, and nginx could not connect to the Python container because the IP address you gave is bound inside that container. You might need to add extra_hosts: to the nginx part of the docker-compose file so that it can connect to the other container.
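For reference, an extra_hosts entry looks like the sketch below (the IP is hypothetical). Note that when both services sit on the same Compose network, the service name web already resolves, so a hardcoded container IP is brittle and rarely needed:

nginx:
  extra_hosts:
    - "web:172.24.0.2"  # hypothetical container IP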

If you set up decent logging in nginx you will realize that it is not running on 127.0.0.1, since it's a Compose service. So you need to check which IP your Compose network runs on; that is where you will find nginx.
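For instance, a log format along these lines makes the proxy path visible (a sketch using standard nginx variables; it goes in the http block of nginx.conf):

log_format upstreamlog '$remote_addr -> $upstream_addr [$time_local] '
                       '"$request" $status urt=$upstream_response_time';
access_log /var/log/nginx/access.log upstreamlog;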

Related

nginx reverse proxy with docker-compose config to serve traffic to multiple domains

I am trying to use nginx with docker-compose to route traffic for two different apps with different domain names. I want to be able to go to publisher.dev, but I can only access that app from localhost:3000 (this is a React app); I have another app which I want to access from widget.dev, but I can only access it from localhost:8080 (this is a Preact app). This is my folder structure and my configs:
|-docker-compose.yml
|-nginx
|--default.conf
|--Dockerfile.dev
|-publisher
|--// react app
|--Dockerfile.dev
|-widget
|--// preact app (widget)
|--Dockerfile.dev
# default.conf
upstream publisher {
    server localhost:3000;
}

upstream widget {
    server localhost:8080;
}

server {
    listen 80;
    server_name publisher.dev;

    location / {
        proxy_pass http://publisher/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}

server {
    listen 80;
    server_name widget.dev;

    location / {
        proxy_pass http://widget/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
nginx Dockerfile.dev
FROM nginx:stable-alpine
COPY ./default.conf /etc/nginx/conf.d/default.conf
publisher Dockerfile.dev (same as widget Dockerfile.dev)
# Specify the base image
FROM node:16-alpine
# Specify the working directory inside the container
WORKDIR /app
# copy the package json from your local hard drive to the container
COPY ./package.json ./
# install dependencies
RUN npm install
# copy files from local hard drive into container
# by copying the package.json and running npm install before copying the rest of the files,
# this ensures that a change to a file does not cause a re-run of npm install
COPY ./ ./
# command to run when the container starts up
CMD ["npm", "run", "start"]
# build this docker container with:
# docker build -f Dockerfile.dev .
# run this container with:
# docker run <container id>
docker-compose.yml
version: '3'
services:
  nginx:
    build:
      dockerfile: Dockerfile.dev
      context: ./nginx
    ports:
      - 3050:80
    restart: always
    depends_on:
      - publisher
      - widget
  publisher:
    stdin_open: true
    build:
      dockerfile: Dockerfile.dev
      context: ./publisher
    volumes:
      - /app/node_modules
      - ./publisher:/app
    ports:
      - 3000:3000
    environment:
      VIRTUAL_HOST: publisher.dev
  widget:
    stdin_open: true
    build:
      dockerfile: Dockerfile.dev
      context: ./widget
    volumes:
      - /app/node_modules
      - ./widget:/app
    ports:
      - 8080:8080
    environment:
      VIRTUAL_HOST: widget.dev
hosts file
127.0.0.1 publisher.dev
127.0.0.1 widget.dev
Why is your upstream trying to connect to publisher and widget? Shouldn't they connect to localhost:3000 and localhost:8080? Let the upstream names stay publisher and widget, but point them at localhost:

upstream publisher {
    # server publisher:3000;
    server localhost:3000;
}

How to correctly configure docker-compose and nginx for multiple Dash/Flask apps?

I am trying to serve several Dash/Flask apps using docker-compose and nginx. Currently my set-up looks like this:
The Dash app is using host 0.0.0.0 and port 8050:
if __name__ == '__main__':
    app.run_server(host='0.0.0.0', debug=True, port=8050)
In the app's Dockerfile, port 8050 is exposed:
FROM python:3.9
# Copy function code
COPY lp_scr_design_app.py /
COPY assets/ assets/
COPY data/ data/
# Install the function's dependencies using file requirements.txt
# from your project folder.
COPY requirements.txt ./
RUN pip install --trusted-host files.pythonhosted.org --trusted-host pypi.org --trusted-host pypi.python.org -r requirements.txt
EXPOSE 8050
# Start app
CMD ["python", "lp_scr_design_app.py"]
Then nginx is configured such that it passes this app through for the location /:
server {
    listen 80;
    server_name docker_flask_gunicorn_nginx;

    location / {
        proxy_pass http://lp_scr_design_app:8050;
        # Do not change this
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /static {
        rewrite ^/static(.*) /$1 break;
        root /static;
    }
}
with a Dockerfile like this:
FROM nginx:1.15.8
RUN rm /etc/nginx/nginx.conf
COPY nginx.conf /etc/nginx/
RUN rm /etc/nginx/conf.d/default.conf
COPY project.conf /etc/nginx/conf.d/
Finally, in the docker-compose file both apps are orchestrated like this:
version: '3'
services:
  lp_scr_design_app:
    container_name: lp_scr_design_app
    restart: always
    build: ./lp_scr_design_app
    ports:
      - "8050:8050"
    command: gunicorn -w 1 -b :8050 app:server
  nginx:
    container_name: nginx
    restart: always
    build: ./nginx
    ports:
      - "80:80"
    depends_on:
      - lp_scr_design_app
I can now build and run docker-compose successfully without any issues. However, if I try to open the root directory / in a browser, I get (after a while) a 502 Bad Gateway from nginx.
Where did I go wrong with my set-up here?

Docker compose of nginx, express, letsencrypt SSL get 502 Bad gateway

I am trying to find a way to publish nginx, express, and letsencrypt's SSL all together using docker-compose. There are many documents about this, so I referenced them and tried to make my own configuration. I succeeded in configuring nginx + SSL by following https://medium.com/@pentacent/nginx-and-lets-encrypt-with-docker-in-less-than-5-minutes-b4b8a60d3a71
So now I want to put a sample Node.js Express app into the nginx + SSL docker-compose setup. But, I don't know why, I get a 502 Bad Gateway from nginx rather than Express's initial page.
I am testing this app with a spare domain I have, on an AWS EC2 Ubuntu 16 instance. I think there is no problem with the domain's DNS or the security rule settings; ports 80, 443 and 3000 are all open already, and when I tested without the Express app, the nginx default page showed up fine.
nginx conf in /etc/nginx/conf.d
server {
    listen 80;
    server_name example.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name example.com;
    server_tokens off;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    ssl_certificate /etc/letsencrypt/live/sendpi.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/sendpi.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
docker-compose.yml
version: '3'
services:
  app:
    container_name: express
    restart: always
    build: .
    ports:
      - '3000:3000'
  nginx:
    container_name: nginx
    image: nginx:1.15-alpine
    restart: unless-stopped
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    ports:
      - "80:80"
      - "443:443"
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
Dockerfile of express
FROM node:12.2-slim
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
I think SSL works fine, but there are some problems between express app and nginx. How can I fix this?
proxy_pass http://localhost:3000
is proxying the request to port 3000 on the container that is running nginx. What you instead want is to connect to port 3000 of the container running Express. For that, we need to do two things.
First, we make the express container visible to the nginx container under a predefined hostname. We can use links in docker-compose.
nginx:
  links:
    - "app:expressapp"
Alternatively, since links are now considered a legacy feature, a better way is to use a user defined network. Define a network of your own with
docker network create my-network
and then connect your containers to that network in the compose file by adding the following at the top level:
networks:
  default:
    external:
      name: my-network
All the services connected to a user defined network can access each other via name without explicitly setting up links.
Then in the nginx.conf, we proxy to the express container using that hostname:
location / {
    proxy_pass http://app:3000;
}
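A quick check that name resolution works across the network (assuming the compose service is called app, as above):

docker-compose exec nginx ping -c 1 app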
Warning: The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link.
Define networks in your docker-compose.yml and configure your services with the appropriate network:
version: '3'
services:
  app:
    restart: always
    build: .
    networks:
      - backend
    expose:
      - "3000"
  nginx:
    image: nginx:1.15-alpine
    restart: unless-stopped
    depends_on:
      - app
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    networks:
      - frontend
      - backend
    ports:
      - "80:80"
      - "443:443"
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
networks:
  frontend:
  backend:
Note: the app service no longer publishes its ports to the host; it only exposes port 3000 (ref. exposing and publishing ports), so it is only available to services connected to the backend network. The nginx service has a foot in both the backend and frontend networks, accepting incoming traffic on the frontend and proxying connections to the app on the backend (ref. multi-host networking).
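One way to sanity-check the result (assuming the service names above, curl on the host, and busybox wget in the alpine nginx image): from the host, port 3000 should now be unreachable, while nginx still reaches the app over the shared backend network:

curl -sS --max-time 2 http://localhost:3000 || echo "unreachable from the host, as intended"
docker-compose exec nginx wget -qO- http://app:3000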
With user-defined networks you can resolve the service name:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    upstream app {
        server app:3000 max_fails=3;
    }

    server {
        listen 80;
        server_name example.com;
        return 301 https://$server_name$request_uri;
    }

    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        server_name example.com;
        server_tokens off;

        location / {
            proxy_pass http://app;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }

        ssl_certificate /etc/letsencrypt/live/sendpi.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/sendpi.com/privkey.pem;
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    }
}
Removing the container_name from your services makes it possible to scale them: docker-compose up -d --scale nginx=1 --scale app=3 - nginx will load-balance the traffic round-robin across the 3 app containers.
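A quick way to confirm the scaled instances came up (service names as above):

docker-compose up -d --scale nginx=1 --scale app=3
docker-compose ps app   # should list three running app containers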
I think maybe a source of confusion here is the way the "localhost" designation behaves among running services in docker-compose. The way docker-compose orchestrates your containers, each of the containers understands itself to be "localhost", so "localhost" does not refer to the host machine (and if I'm not mistaken, there is no way for a container running on the host to access a service exposed on a host port, apart from maybe some security exploits). To demonstrate:
services:
  app:
    container_name: express
    restart: always
    build: .
    ports:
      - '2999:3000' # expose app's port on host's 2999
Rebuild
docker-compose build
docker-compose up
Tell the container running the express app to curl its own service on port 3000:
$ docker-compose exec app /bin/bash -c "curl http://localhost:3000"
<!DOCTYPE html>
<html>
<head>
<title>Express</title>
<link rel='stylesheet' href='/stylesheets/style.css' />
</head>
<body>
<h1>Express</h1>
<p>Welcome to Express</p>
</body>
</html>
Tell app to try that same service, which we exposed on port 2999 on the host machine:
$ docker-compose exec app /bin/bash -c "curl http://localhost:2999"
curl: (7) Failed to connect to localhost port 2999: Connection refused
We will of course see this same behavior between running containers as well, so in your setup nginx was trying to proxy to its own service running on localhost:3000 (but there wasn't one, as you know).
Tasks
build a NodeJS app
add SSL functionality out of the box (working automatically)
Solution
https://github.com/evertramos/docker-compose-letsencrypt-nginx-proxy-companion
/{path_to_the_project}/docker-compose.yml
version: '3.7'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy:alpine
    restart: always
    container_name: nginx-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs:ro
      - ./vhost.d:/etc/nginx/vhost.d
      - ./html:/usr/share/nginx/html
      - ./conf.d:/etc/nginx/conf.d
    ports:
      - "443:443"
      - "80:80"
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: letsencrypt
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./certs:/etc/nginx/certs:rw
      - ./vhost.d:/etc/nginx/vhost.d:rw
      - ./html:/usr/share/nginx/html:rw
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
  api:
    container_name: ${APP_NAME}
    build:
      context: .
      dockerfile: Dockerfile
    command: npm start --port ${APP_PORT}
    expose:
      - ${APP_PORT}
    # ports:
    #   - ${APP_PORT}:${APP_PORT}
    restart: always
    environment:
      VIRTUAL_PORT: ${APP_PORT}
      VIRTUAL_HOST: ${DOMAIN}
      LETSENCRYPT_HOST: ${DOMAIN}
      LETSENCRYPT_EMAIL: ${LETSENCRYPT_EMAIL}
      NODE_ENV: production
      PORT: ${APP_PORT}
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs:ro
/{path_to_the_project}/.env
APP_NAME=best_api
APP_PORT=3000
DOMAIN=api.site.com
LETSENCRYPT_EMAIL=myemail@gmail.com
Do not forget to point DOMAIN at your server before you run the container there.
How does it work?
just run docker-compose up --build -d
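If something doesn't come up, the proxy and companion logs are the first place to look (container names as defined in the compose file above):

docker logs nginx-proxy   # virtual host registration
docker logs letsencrypt   # certificate issuance and renewal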

Nginx deployed in docker container doesn't expose nuxtjs deployed in another docker container (502 Bad Gateway)

I'm trying to run a Nuxt.js application using nginx as a proxy server, in Docker containers. So I have two containers: nginx and nuxt.
Here is how I'm building the nuxt application:
FROM node:11.15
ENV APP_ROOT /src
RUN mkdir ${APP_ROOT}
WORKDIR ${APP_ROOT}
ADD . ${APP_ROOT}
RUN npm install
RUN npm run build
ENV host 0.0.0.0
The build result seems to be fine. Next is the nginx config:
server {
    listen 80;
    server_name dev.iceik.com.ua;

    location / {
        proxy_pass http://nuxt:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Also I've tried this nginx config
upstream nuxt {
    server nuxt:3000;
}

server {
    listen 80;
    server_name dev.iceik.com.ua;

    location / {
        proxy_pass http://nuxt;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
And finally my docker-compose file
version: "3"
services:
nuxt:
build: ./app/
container_name: nuxt
restart: always
ports:
- "3000:3000"
command:
"npm run start"
nginx:
image: nginx:1.17
container_name: nginx
ports:
- "80:80"
volumes:
- ./nginx:/etc/nginx/conf.d
depends_on:
- nuxt
I can ping the nuxt container from the nginx container, and the relevant ports are open.
So, the expected result is that I can access my nuxt application.
However I'm getting 502 Bad Gateway
Do you have any ideas why nginx doesn't expose my nuxt application?
Thank you for any suggestions!
Node.js is listening on localhost:3000 instead of 0.0.0.0:3000.
Please correct that and it will work.
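For example, something like this in the nuxt service of docker-compose.yml (a sketch; Nuxt reads the uppercase HOST/NUXT_HOST variables, whereas the lowercase host set in the Dockerfile above may not be picked up):

nuxt:
  environment:
    HOST: 0.0.0.0       # listen on all interfaces so the nginx container can reach it
    NUXT_HOST: 0.0.0.0  # either variable should do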
It is always good to put your containers on a network if they need to talk to each other; the other way is to use the host network (which only works on Linux). Try the docker-compose.yml below; the containers should be able to reach each other by their container names.
version: "3"
services:
nuxt:
build: ./app/
container_name: nuxt
restart: always
ports:
- "3000:3000"
command:
"npm run start"
networks:
- my_net
nginx:
image: nginx:1.17
container_name: nginx
ports:
- "80:80"
volumes:
- ./nginx:/etc/nginx/conf.d
depends_on:
- nuxt
networks:
- my_net
networks:
my_net:
driver: "bridge"
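You can verify that both containers landed on the same network with (a sketch; <project> is your Compose project name, usually the directory name):

docker network inspect <project>_my_net --format '{{range .Containers}}{{.Name}} {{end}}'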

problem trying to make a reverse proxy on a flask gunicorn application with nginx and docker

I'm trying to make a reverse proxy for my Flask application and dockerize it, using nginx, gunicorn, docker and docker-compose. Before this, the nginx part was in the same container as the web app; I'm trying to separate it.
My docker-compose YAML file is:
version: '3.6'
services:
  nginx:
    restart: always
    build: ./nginx/
    ports:
      - 8008:8008
    networks:
      - web_net
  flask_min:
    build: .
    image: flask_min
    container_name: flask_min
    expose:
      - "8008"
    networks:
      - web_net
    depends_on:
      - nginx
networks:
  web_net:
    driver: bridge
My Dockerfile is:
FROM python:3.6
MAINTAINER aurelien beliard (email@domain.com)
RUN apt update
COPY . /usr/flask_min
WORKDIR /usr/flask_min
RUN useradd -r -u 20979 -ms /bin/bash aurelien.beliard
RUN pip3 install -r requirements.txt
CMD gunicorn -w 3 -b :8008 app:app
My nginx Dockerfile is:
FROM nginx
COPY ./flask_min /etc/nginx/sites-available/
RUN mkdir /etc/nginx/sites-enabled
RUN ln -s /etc/nginx/sites-available/flask_min /etc/nginx/sites-enabled/flask_min
My nginx config file, which goes in /etc/nginx/sites-available and sites-enabled, is named flask_min:
server {
    listen 8008;
    server_name http://192.168.16.241/;
    charset utf-8;

    location / {
        proxy_pass http://flask_min:8008;
    }
}
The requirements.txt file is:
Flask==0.12.2
grequests==0.3.0
gunicorn==19.7.1
Jinja2==2.10
The two containers are created fine and gunicorn starts well, but I can't access the application, and there is nothing in the nginx access and error logs.
If you have any ideas, it would be very appreciated.
PS: Sorry for the mistakes, English is not my native language.
As mentioned in Maxm's answer, flask_min is depending on nginx to start up first. One way to fix it is to reverse the dependency order, but I think there's a more clever solution that doesn't require the dependency.
Nginx tries to do some optimization by caching the DNS results of proxy_pass, but you can make it more flexible by setting the target in a variable. This allows you to freely restart flask without having to also restart nginx.
Here's an example:
resolver 127.0.0.11 ipv6=off;
set $upstream http://flask_min:8008;
proxy_pass $upstream;
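In context, that looks something like the sketch below (assuming the flask_min site file from the question; 127.0.0.11 is Docker's embedded DNS resolver):

server {
    listen 8008;
    server_name _;
    charset utf-8;

    location / {
        resolver 127.0.0.11 ipv6=off;        # Docker's embedded DNS
        set $upstream http://flask_min:8008; # resolved at request time, not at startup
        proxy_pass $upstream;
    }
}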
server_name should just be the host. Try localhost or just _.
You can also specify multiple hosts: server_name 192.168.16.241 localhost;
The depends_on should be on nginx, not flask_min. Remove it from flask_min and add the following to nginx:

depends_on:
  - flask_min
See if that works, let me know if you run into any more snags.
