Nginx - host not found in upstream "web:8000" - docker

Sorry for repeating the topic, but I can't solve the problem and I don't understand what I'm doing wrong. It always shows me this error:
nginx_1 | 2021/07/27 14:26:38 [emerg] 1#1: host not found in upstream "web:8000" in /etc/nginx/conf.d/nginx.conf:2
nginx_1 | nginx: [emerg] host not found in upstream "web:8000" in /etc/nginx/conf.d/nginx.conf:2
docker-compose.prod.yml
version: '3.7'

services:
  nginx:
    build: ./nginx
    ports:
      - "1337:80"
    restart: always
    depends_on:
      - web
  web:
    build:
      context: .
      dockerfile: Dockerfile.prod
    command: gunicorn znajdki.wsgi:application --bind 0.0.0.0:8000
    expose:
      - "8000"
    env_file:
      - ./.env.prod
    depends_on:
      - db
  db:
    image: postgres
    volumes:
      - postgres:/var/lib/postgresql/data
    env_file:
      - ./.env.prod.db

volumes:
  postgres:
nginx/Dockerfile
FROM nginx:stable-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
nginx/nginx.conf
upstream znajdki {
    server web:8000;
}

server {
    listen 80;

    location / {
        proxy_pass http://znajdki;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
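Since nginx only fails this way when it cannot resolve the name web at startup, a first check worth doing (a sketch, assuming the stack is started with exactly this compose file) is whether the web container actually comes up; if it crashes immediately, for example because .env.prod is missing, its DNS entry never appears on the shared network:

docker-compose -f docker-compose.prod.yml up -d --build
docker-compose -f docker-compose.prod.yml ps
docker-compose -f docker-compose.prod.yml logs web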

Related

How to set up a phpMyAdmin docker container on Vultr

I'm trying to get phpMyAdmin to work on my live server on Vultr. I have a full-stack React app for the front end, Express/Node.js for the back end, MySQL for the database, and phpMyAdmin to create tables and such. Both the React app and the Express server work, but phpMyAdmin doesn't.
Below is my docker-compose file:
version: '3.7'

services:
  mysql_db:
    image: mysql
    container_name: mysql_container
    restart: always
    cap_add:
      - SYS_NICE
    volumes:
      - ./data:/var/lib/mysql
    ports:
      - "3306:3306"
    env_file:
      - .env
    environment:
      MYSQL_ROOT_PASSWORD: "${MYSQL_ROOT_PASSWORD}"
      MYSQL_HOST: "${MYSQL_HOST}"
      MYSQL_DATABASE: "${MYSQL_DATABASE}"
      MYSQL_USER: "${MYSQL_USER}"
      MYSQL_PASSWORD: "${MYSQL_PASSWORD}"
    networks:
      - react_network
  phpmyadmin:
    depends_on:
      - mysql_db
    image: phpmyadmin/phpmyadmin:latest
    container_name: phpmyadmin_container
    restart: always
    ports:
      - "8080:80"
    env_file:
      - .env
    environment:
      - PMA_HOST=mysql_db
      - PMA_PORT=3306
      - PMA_ABSOLUTE_URI=https://my-site.com/admin
      - PMA_ARBITRARY=1
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
    networks:
      - react_network
  api:
    restart: always
    image: mycustomimage
    ports:
      - "3001:80"
    container_name: server_container
    env_file:
      - .env
    depends_on:
      - mysql_db
    environment:
      MYSQL_HOST_IP: mysql_db
    networks:
      - react_network
  client:
    image: mycustomimage
    ports:
      - "3000:80"
    restart: always
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
    container_name: client_container
    networks:
      - react_network
  nginx:
    depends_on:
      - api
      - client
    build: ./nginx
    container_name: nginx_container
    restart: always
    ports:
      - "443:443"
      - "80"
    volumes:
      - ./nginx/conf/certificate.crt:/etc/ssl/certificate.crt:ro
      - ./nginx/certs/private.key:/etc/ssl/private.key:ro
      - ./nginx/html:/usr/share/nginx/html
    networks:
      - react_network

volumes:
  data:
  conf:
  certs:
  webconf:
  html:

networks:
  react_network:
Below is my nginx configuration file:
upstream client {
    server client:3000;
}

upstream api {
    server api:3001;
}

server {
    listen 443 ssl http2;
    server_name my-site.com;

    ssl_certificate /etc/ssl/certificate.crt;
    ssl_certificate_key /etc/ssl/private.key;

    location / {
        proxy_pass http://client;
    }

    location /admin {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://phpmyadmin:8080;
    }

    location /sockjs-node {
        proxy_pass http://client;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    location /api {
        rewrite /api/(.*) /$1 break;
        proxy_pass http://api;
    }
}

server {
    listen 80;
    server_name my-site.com www.my-site.com;
    return 301 https://my-site.com$request_uri;
}
I honestly don't know what I'm missing here! I get a 502 Bad Gateway error. If anyone can help me, please do!
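One thing worth checking, judging purely from the compose file above: host port mappings such as 3000:80, 3001:80, and 8080:80 only apply on the host, so inside react_network the client, api, and phpmyadmin containers all listen on port 80. A sketch of the upstream/proxy targets rewritten against the container-side ports (assuming the images really serve on 80):

upstream client {
    server client:80;
}

upstream api {
    server api:80;
}

# and inside the server block, with the same proxy_set_header lines as above:
location /admin {
    proxy_pass http://phpmyadmin:80;
}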

Nginx docker configuration is failing

I have attempted to build a Django / Gunicorn / Nginx configuration to run on AWS. The database container is running separately. When performing the docker-compose build step, however, the nginx step is failing. The files are shown below:
The docker-compose file:
version: '3'

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.prod
    container_name: app
    command: gunicorn The6ix.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
    networks:
      - dbnet
    expose:
      - 8000
    environment:
      aws_access_key_id: ${aws_access_key_id}
      aws_secret_access_key: ${aws_secret_access_key}
  nginx:
    build: ./nginx
    ports:
      - 1337:80
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    depends_on:
      - web

volumes:
  static_volume:
  media_volume:

networks:
  dbnet:
    external: true
My nginx Dockerfile (in ./nginx folder):
FROM nginx:1.21-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
My nginx.conf file (in ./nginx folder):
upstream The6ix {
    server web:8000;
}

server {
    listen 80;

    location / {
        proxy_pass http://The6ix;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/ {
        alias /home/app/web/staticfiles/;
    }

    location /media/ {
        alias /home/app/web/mediafiles/;
    }
}
The error log from my nginx container is as follows:
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf is not a file or does not exist
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/01/03 17:33:59 [emerg] 1#1: host not found in upstream "web:8000" in /etc/nginx/conf.d/nginx.conf:2
nginx: [emerg] host not found in upstream "web:8000" in /etc/nginx/conf.d/nginx.conf:2
Some posts have mentioned the volume creation step in the .yaml file as the culprit. But is there a better way to sequence this so that nginx starts correctly?
I had suspected that the error was the result of my configuration copy. In the end, it was actually a network error. The following modifications corrected the issue:
In a shell:
docker network create nginx_network
Modification to the docker-compose.yaml file:
version: '3'

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.prod
    command: gunicorn The6ix.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    networks:
      - dbnet
      - nginx_network
    ports:
      - "8000:8000"
    environment:
      aws_access_key_id: ${aws_access_key_id}
      aws_secret_access_key: ${aws_secret_access_key}
  nginx:
    build: ./nginx
    ports:
      - 80:80
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    depends_on:
      - web
    networks:
      - nginx_network

volumes:
  static_volume:
  media_volume:

networks:
  dbnet:
    external: true
  nginx_network:
    external: true
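With web and nginx both attached to nginx_network, the service name becomes resolvable from inside the nginx container. A quick sanity check (assuming the container ID from docker ps; nslookup ships with busybox in the alpine-based image):

docker network inspect nginx_network
docker exec <nginx_container_id> nslookup web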

Docker-Compose: Service is not reachable after update

version: '3.7'

services:
  shinyproxy:
    build: /home/administrator/shinyproxy
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.hostname==testnode
    user: root:root
    hostname: shinyproxy
    image: shinyproxy-example
    restart: always
    networks:
      - sp-example-net
    volumes:
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock
      - type: bind
        source: /home/administrator/shinyproxy/application.yml
        target: /opt/shinyproxy/application.yml
    ports:
      - 4000:4000
  mariadb:
    image: mariadb
    networks:
      - sp-example-net
    volumes:
      - type: bind
        source: /home/administrator/mariadbdata
        target: /var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root_password
      MYSQL_DATABASE: keycloak
      MYSQL_USER: keycloak
      MYSQL_PASSWORD: xyz
    deploy:
      placement:
        constraints:
          - node.hostname==testnode
  keycloak:
    image: jboss/keycloak
    networks:
      - sp-example-net
    volumes:
      - type: bind
        source: /home/administrator/compose/nginx/fullchain.pem
        target: /etc/x509/https/tls.crt
      - type: bind
        source: /home/administrator/compose/nginx/privkey.pem
        target: /etc/x509/https/tls.key
      - ./theme/:/opt/jboss/keycloak/themes/custom/
    environment:
      - PROXY_ADDRESS_FORWARDING=true
      - KEYCLOAK_USER=xyzasd
      - KEYCLOAK_PASSWORD=xyz
    ports:
      - 8443:8443
    deploy:
      placement:
        constraints:
          - node.hostname==testnode
    restart: "always"
  nginx_service:
    image: nginx_custom
    ports:
      - '80:80'
      - '443:443'
    build: ./nginx/
    networks:
      - sp-example-net

networks:
  sp-example-net:
    driver: overlay
    external: true
    attachable: true
This is my compose file. The keycloak service authenticates shinyproxy users. I use docker-compose up --build -d to get everything running, and it works. Sometimes I have to change small parts of my shinyproxy service and update everything with the same command. Changes get detected and the output looks like this:
compose_keycloak_1 is up-to-date
Recreating compose_shinyproxy_1 ... done
I am running the services in combination with nginx and get the following error:
nginx_service_1 | 2020/06/05 10:02:11 [error] 7#7: *54 connect() failed (113: No route to host) while connecting to upstream, client: 185.130.32.1, server: myserver.com, request: "GET / HTTP/1.1", upstream: "http://10.0.3.181:4000/", host: "myserver.com"
Running docker-compose down and then docker-compose up --build again works, but I do not want to take down all of my services just to update one.
Can anyone tell me why this might happen and how to solve it?
Edit: I am more and more sure this is an nginx issue, not a docker issue. Might that be the case?
My nginx.conf looks like this:
server {
    listen 443;
    server_name server.com;

    ssl on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_certificate /etc/certs/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/certs/privkey.pem; # managed by Certbot

    location / {
        proxy_pass http://shinyproxy:4000; ### service name taken from docker-compose
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 600s;
        proxy_redirect off;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Protocol $scheme;
    }
    ....
Edit 2: The issue might be related to this:
https://github.com/docker/compose/issues/2003
Instead of doing docker-compose down and then docker-compose up --build for the whole project, you can actually start just that particular service by running docker-compose up -d serviceName. Have a look at the example below.
-d stands for daemon/detached mode.
version: '3'

services:
  test:
    container_name: test
    image: 'busybox'
    command: 'sleep 5d'
  test1:
    container_name: test1
    image: 'busybox'
    command: 'sleep 4d'
$ docker-compose up -d
Creating network "proj_default" with the default driver
Creating test  ... done
Creating test1 ... done

$ docker-compose ps
Name     Command    State   Ports
---------------------------------
test     sleep 5d   Up
test1    sleep 4d   Up

$ docker-compose up -d test1
Recreating test1 ... done
The solution was to restart nginx. It seems that after every restart a container gets a new IP, while nginx keeps using the old one and can no longer find it.
Restart the nginx container whenever an upstream service is updated:
docker exec <nginx_container_id> nginx -s reload
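A sketch of an alternative that avoids manual reloads, assuming a user-defined Docker network: point nginx at Docker's embedded DNS (127.0.0.11) and put the upstream address in a variable, which forces nginx to re-resolve the name at request time instead of caching the IP once at startup:

location / {
    # Docker's embedded DNS on user-defined networks; cache lookups for 10s
    resolver 127.0.0.11 valid=10s;

    # A variable in proxy_pass makes nginx resolve the name per request
    set $upstream_shinyproxy http://shinyproxy:4000;
    proxy_pass $upstream_shinyproxy;
}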

Nginx deployed in docker container doesn't expose nuxtjs deployed in another docker container (502 Bad Gateway)

I'm trying to run a Nuxt.js application using nginx as a proxy server, in Docker containers. So I have 2 containers: nginx and nuxt.
Here is how I'm building the nuxt application:
FROM node:11.15
ENV APP_ROOT /src
RUN mkdir ${APP_ROOT}
WORKDIR ${APP_ROOT}
ADD . ${APP_ROOT}
RUN npm install
RUN npm run build
ENV host 0.0.0.0
The build result seems to be fine. Next is the nginx config:
server {
    listen 80;
    server_name dev.iceik.com.ua;

    location / {
        proxy_pass http://nuxt:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
I've also tried this nginx config:
upstream nuxt {
    server nuxt:3000;
}

server {
    listen 80;
    server_name dev.iceik.com.ua;

    location / {
        proxy_pass http://nuxt;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
And finally, my docker-compose file:
version: "3"
services:
nuxt:
build: ./app/
container_name: nuxt
restart: always
ports:
- "3000:3000"
command:
"npm run start"
nginx:
image: nginx:1.17
container_name: nginx
ports:
- "80:80"
volumes:
- ./nginx:/etc/nginx/conf.d
depends_on:
- nuxt
I can ping the nuxt container from the nginx container, and the ports are open.
So the expected result is that I can access my nuxt application. However, I'm getting a 502 Bad Gateway.
Do you have any ideas why nginx doesn't expose my nuxt application?
Thank you for any suggestions!
Node.js is listening on localhost:3000 instead of 0.0.0.0:3000.
Correct that and it will work.
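A minimal sketch of that fix, assuming Nuxt's standard environment variables: Nuxt reads the upper-case HOST (or NUXT_HOST) variable, and Docker ENV names are case-sensitive, so the lower-case ENV host 0.0.0.0 in the Dockerfile above is a different, ignored variable. In the Dockerfile:

# Upper-case HOST is what Nuxt actually reads
ENV HOST=0.0.0.0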
It's always good to put your containers on a shared network if they need to talk to each other; the other way is to use the host network (which only works on Linux). Try the docker-compose.yml below; the containers should then be able to reach each other by container name.
version: "3"
services:
nuxt:
build: ./app/
container_name: nuxt
restart: always
ports:
- "3000:3000"
command:
"npm run start"
networks:
- my_net
nginx:
image: nginx:1.17
container_name: nginx
ports:
- "80:80"
volumes:
- ./nginx:/etc/nginx/conf.d
depends_on:
- nuxt
networks:
- my_net
networks:
my_net:
driver: "bridge"

Nginx and docker-compose - static files not returned

This is my docker-compose.yml:
version: '2'

services:
  nginx:
    image: nginx:latest
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - ./conf.d:/etc/nginx/conf.d/
      - ./logs/nginx_access.log:/var/log/nginx_access.log
      - ./logs/nginx_error.log:/var/log/nginx_error.log
      - ./src/app/static:/flask-app/src/app/static
    depends_on:
      - web
  web:
    build: ./
    command: gunicorn manage:app --bind 0.0.0.0:8000 --access-logfile=logs/gunicorn_access_log.txt
    ports:
      - '8000:8000'
    volumes:
      - ./:/flask-app
    environment:
      DATABASE_URL: postgresql://postgres:pass@localhost/flask_deploy
      REDIS_HOST: redis
      SECRET_KEY: 'BbGd3qe$dsf1'
      CONFIG_NAME: 'prod'
    links:
      - postgres:postgres
      - redis:redis
    depends_on:
      - postgres
      - redis
  postgres:
    image: postgres:9.4
    volumes:
      - ./psql-data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: 'pass'
      POSTGRES_DB: 'flask_deploy'
    ports:
      - '5432:5432'
  redis:
    image: "redis:3.0-alpine"
    command: redis-server
    ports:
      - '6379:6379'
And this is my nginx config (web is the service name from the docker-compose file):
server {
    listen 80;
    server_name web;

    # access and error logs in /var/log
    access_log /var/log/nginx_access.log;
    error_log /var/log/nginx_error.log;

    location / {
        # forward application requests to the gunicorn server
        proxy_pass http://web:8000;
    }

    location /static {
        # serve static files directly, without forwarding to the application
        autoindex on;
        alias /flask-app/src/app/static;
        expires 1d;
    }
}
My site is available on 127.0.0.1 (without a port). But I have trouble with static files. Flask's url_for generates URLs like:
http://web:8000/static/img/do.jpg
and that link is unavailable. I can try this instead:
http://127.0.0.1:8000/static/img/do.jpg
and I see the picture, but it is returned by gunicorn, not nginx :(
I am a beginner with docker-compose and nginx. Maybe some comments about my config? Thanks!
Solution:
proxy_set_header Host $host:8000;
Full config:
server {
    listen 80;
    server_name localhost;
    root /flask-app/src/app;

    access_log /var/log/nginx_access.log;
    error_log /var/log/nginx_error.log;

    location / {
        proxy_set_header Host $host:8000;
        proxy_pass http://web:8000;
    }

    location /static {
        autoindex on;
        expires 1d;
    }
}
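To confirm that nginx, and not gunicorn, now serves the static files, check the Server response header against the example image from above:

# should report nginx rather than gunicorn
curl -sI http://127.0.0.1/static/img/do.jpg | grep -i '^server'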
