I'm trying to build my system as below; all the components are built with Docker.
nginx -> index.html(localhost:8080)
nginx -> airflow(localhost:8080/airflow/)
nginx -> flower(localhost:8080/flower/)
"http://localhost:8080" is worked and show index.html about nginx,
but when I type into "http://localhost:8080/airflow/" doesn't work and log output was like below. How can I fix this issue?
nginx-for-airflow_1 | 2022/08/04 09:09:42 [error] 30#30: *5 "/usr/share/nginx/html/airflow/index.html" is not found
My configuration is below.
nginx.conf
upstream airflow_webserver {
    server airflow-webserver:8080;
}

upstream airflow_flower {
    server flower:5555;
}

server {
    root /;
    listen 80;
    server_name localhost;
    charset utf-8;

    # location ^~ / {
    #     deny all;
    # }

    location /airflow/ {
        proxy_pass http://airflow_webserver;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
    }

    location /flower/ {
        proxy_pass http://airflow_flower/;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
    }
}
I modified my airflow.cfg like below:
base_url = http://airflow-webserver:8080
web_server_port = 8080
enable_proxy_fix = True
proxy_fix_x_port = 3
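For reference, the same settings can also be expressed as environment variables on the webserver service in docker-compose, using Airflow's AIRFLOW__SECTION__KEY convention (a sketch of an equivalent form, not the poster's actual file):

# equivalent env-var form (sketch); values mirror the airflow.cfg above
environment:
  AIRFLOW__WEBSERVER__BASE_URL: 'http://airflow-webserver:8080'
  AIRFLOW__WEBSERVER__WEB_SERVER_PORT: '8080'
  AIRFLOW__WEBSERVER__ENABLE_PROXY_FIX: 'True'
  AIRFLOW__WEBSERVER__PROXY_FIX_X_PORT: '3'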
Here is my docker-compose.yaml file. I checked that I can access the Flower URL from inside the "airflow-webserver" container using "curl http://flower:5555".
nginx-for-airflow:
  image: nginx:latest
  volumes:
    - ./nginx.conf:/etc/nginx/nginx.conf
  ports:
    - 8080:80
  healthcheck:
    test: ["CMD", "curl", "--fail", "http://localhost:8080/"]
    interval: 10s
    timeout: 10s
    retries: 5
  restart: always
  depends_on:
    <<: *airflow-common-depends-on
    airflow-init:
      condition: service_completed_successfully
    flower:
      condition: service_healthy
    airflow-webserver:
      condition: service_healthy

flower:
  <<: *airflow-common
  command: celery flower
  # profiles:
  #   - flower
  # ports:
  #   - 5555:5555
  expose:
    - 5555
  healthcheck:
    test: ["CMD", "curl", "--fail", "http://localhost:5555/"]
    interval: 10s
    timeout: 10s
    retries: 5
  restart: always
  depends_on:
    <<: *airflow-common-depends-on
    airflow-init:
      condition: service_completed_successfully

airflow-webserver:
  <<: *airflow-common
  command: webserver
  expose:
    - 8080
  healthcheck:
    test: ["CMD", "curl", "--fail", "http://localhost:8080/health"]
    interval: 10s
    timeout: 10s
    retries: 5
  restart: always
  depends_on:
    <<: *airflow-common-depends-on
    airflow-init:
      condition: service_completed_successfully
You need to move the root directive into a location / {} block. All requests are matching the server-level root.
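A minimal sketch of that change (the docroot path here is an assumption; point it wherever index.html actually lives):

server {
    listen 80;
    server_name localhost;

    # serve the static index.html only for requests no other location matches
    location / {
        root /usr/share/nginx/html;  # assumed docroot, adjust as needed
        index index.html;
    }

    # the proxy locations (/airflow/, /flower/) stay exactly as in the original config
}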
Related
I am using xibo-cms with Docker and I would like to set up an nginx proxy server for SSL purposes. I created a docker-compose file with all the containers, but I always get a TOO_MANY_REDIRECTS error when I set the proxy_set_header Host $host; parameter; without that parameter, the URL is redirected to the container service name, which is unknown to the browser. I don't understand what is wrong with my configuration.
My docker-compose:
version: "2.1"
services:
proxy:
image: nginx:1.23.2-alpine
volumes:
- ./conf/:/etc/nginx/conf.d/
- /etc/ssl/certs/STAR_mydomain.com.pem:/etc/ssl/certs/STAR_mydomain.com.pem
ports:
- "443:443"
- "80:80"
restart: always
cms-db:
image: mysql:5.7
volumes:
- "./shared/db:/var/lib/mysql:Z"
environment:
- MYSQL_DATABASE=cms
- MYSQL_USER=cms
- MYSQL_RANDOM_ROOT_PASSWORD=yes
mem_limit: 1g
env_file: config.env
restart: always
cms-xmr:
image: xibosignage/xibo-xmr:0.9
ports:
- "9505:9505"
restart: always
mem_limit: 256m
env_file: config.env
cms-web:
image: xibosignage/xibo-cms:release-3.2.1
volumes:
- "./shared/cms/custom:/var/www/cms/custom:Z"
- "./shared/backup:/var/www/backup:Z"
- "./shared/cms/web/theme/custom:/var/www/cms/web/theme/custom:Z"
- "./shared/cms/library:/var/www/cms/library:Z"
- "./shared/cms/web/userscripts:/var/www/cms/web/userscripts:Z"
- "./shared/cms/ca-certs:/var/www/cms/ca-certs:Z"
restart: always
links:
- cms-db:mysql
- cms-xmr:50001
- proxy
environment:
- XMR_HOST=cms-xmr
- CMS_USE_MEMCACHED=true
- MEMCACHED_HOST=cms-memcached
env_file: config.env
# ports:
# - "8080:80"
mem_limit: 1g
cms-memcached:
image: memcached:alpine
command: memcached -m 15
restart: always
mem_limit: 100M
cms-quickchart:
image: ianw/quickchart
restart: always
and here is my nginx config
upstream docker-xibo {
    server xiboo-cms-web-1:80;
}

server {
    if ($host = display.mydomain.com) {
        return 301 https://$host$request_uri;
    }
    listen 80;
    server_name display.mydomain.com;
}

server {
    listen 443 ssl;
    server_name display.mydomain.com;

    ssl_certificate /etc/ssl/certs/STAR_mydomain.com.pem;
    ssl_certificate_key /etc/ssl/certs/STAR_mydomain.com.pem;

    location / {
        proxy_pass http://docker-xibo;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
Thank you
The problem was on the backend, which had been configured for an HTTPS connection; the nginx proxy configuration was right.
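For anyone hitting the same redirect loop: a common pattern (a sketch, not from the original setup) is to keep the proxied connection on plain HTTP and tell the backend via a forwarded header that the client's request was already HTTPS, so it stops issuing its own redirect:

location / {
    proxy_pass http://docker-xibo;
    proxy_set_header Host $host;
    # tell the backend the original scheme so it doesn't redirect http -> https itself
    proxy_set_header X-Forwarded-Proto $scheme;
}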
I'm having trouble accessing a locally-hosted website. The idea is that a site hosted in a docker container and sitting behind an Nginx proxy should be accessible from the internet.
I have a hostname with NoIP, let's call it stuff.ddns.net.
I've set up IP updates to NoIP DNS servers (i.e., stuff.ddns.net always points to my router).
My router forwards ports 80 and 443 to a static IP on my local network (a Linux machine).
I'm hosting an Apache Airflow web server in a Docker container on aforementioned Linux machine, and I've set AIRFLOW__WEBSERVER__BASE_URL: 'https://stuff.ddns.net/airflow'.
When I try accessing stuff.ddns.net/airflow in my web browser, I get Safari can't open the page "stuff.ddns.net/airflow" because Safari can't connect to the server "stuff.ddns.net".
Here is my nginx.conf:
# top-level http config for websocket headers
# If Upgrade is defined, Connection = upgrade
# If Upgrade is empty, Connection = close
events {
    worker_connections 1024;
}

http {
    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    upstream airflow {
        server localhost:8080;
    }

    server {
        listen [::]:80;
        server_name stuff.ddns.net;
        return 302 https://$host$request_uri;
    }

    server {
        listen [::]:443 ssl;
        server_name stuff.ddns.net;

        ssl_certificate /run/secrets/stuff_ddns_net_pem_chain;
        ssl_certificate_key /run/secrets/stuff_ddns_net_key;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_dhparam /run/secrets/dhparam.pem;
        ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';

        location /airflow/ {
            proxy_pass http://airflow;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}
Ideas?
EDIT: A truncated (i.e., other Airflow components left out) docker-compose.yml for full clarity of the setup:
version: '3.7'

x-airflow-common:
  &airflow-common
  image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.4.0}
  # build: .
  environment:
    &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: CeleryExecutor
    AIRFLOW__CORE__SQL_ALCHEMY_CONN_CMD: 'cat /run/secrets/sql_alchemy_conn'
    AIRFLOW__CELERY__RESULT_BACKEND_CMD: 'cat /run/secrets/result_backend'
    AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0
    AIRFLOW__CORE__FERNET_KEY: ''
    AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
    AIRFLOW__CORE__LOAD_EXAMPLES: 'false'
    AIRFLOW__API__AUTH_BACKENDS: 'airflow.api.auth.backend.basic_auth'
    AIRFLOW__WEBSERVER__BASE_URL: 'https://stuff.ddns.net/airflow'
    AIRFLOW__WEBSERVER__ENABLE_PROXY_FIX: 'True'
    _PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-}
  volumes:
    - ./storage/airflow/dags:/opt/airflow/dags
    - ./storage/airflow/logs:/opt/airflow/logs
    - ./storage/airflow/plugins:/opt/airflow/plugins
  user: "${AIRFLOW_UID:-1000}:0"
  secrets:
    - sql_alchemy_conn
    - result_backend
    - machine_pass
  depends_on:
    &airflow-common-depends-on
    redis:
      condition: service_healthy
    postgres:
      condition: service_healthy

x-stuff-common:
  &stuff-common
  restart: unless-stopped
  networks:
    - ${DOCKER_NETWORK:-stuff}

services:
  nginx:
    <<: *stuff-common
    container_name: stuff-nginx
    image: nginxproxy/nginx-proxy:alpine
    hostname: nginx
    ports:
      - ${PORT_NGINX:-80}:80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./config/nginx.conf:/etc/nginx/nginx.conf:ro
    secrets:
      - stuff_ddns_net_pem_chain
      - stuff_ddns_net_key
      - dhparam.pem

  airflow-webserver:
    <<: *stuff-common
    <<: *airflow-common
    container_name: stuff-airflow-webserver
    command: webserver
    ports:
      - ${PORT_UI_AIRFLOW:-8080}:8080
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:${PORT_UI_AIRFLOW:-8080}/airflow/health"]
      interval: 10s
      timeout: 10s
      retries: 5
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully

networks:
  stuff:
    name: ${DOCKER_NETWORK:-stuff}

secrets:
  ... <truncated>
The solution here was threefold:
1. Ensure the Docker containers all use the same bridge network.
2. In the nginx.conf upstream declaration, replace localhost with the LAN IP address of the Docker host (this works for me since I'm using a statically-assigned address).
3. Add listen <PORT>; above the listen [::]:<PORT>; directives in nginx.conf (nginx binds a [::] listener to IPv6 only by default, so without the plain form IPv4 clients can't connect, and everything breaks).
Here is what the top part of the nginx.conf looks like now:
upstream airflow {
    server 192.168.50.165:8080;
}

server {
    listen 80;
    listen [::]:80;
    server_name stuff.ddns.net;
    return 302 https://$host$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name stuff.ddns.net;
    .....
I have a Docker setup with Strapi, Next.js and nginx. I have it set up so that if I navigate to front.development I hit the Next.js front end, and if I go to back.development I hit the Strapi backend.
I can make a request to back.development/articles/some-article-title in Postman and it works. However, if I make a fetch or axios request to that same URL in Next.js, I get Error: connect ECONNREFUSED 127.0.0.1:80.
I'm stuck on how to resolve this. I've read the solution in a similar question, "Can't call my Laravel API from the Node.js container but I can call it from Postman", but trying that solution results in a 404.
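One quick way to see what the Next.js container can actually reach (a sketch; it assumes a shell and wget are available in the image):

# open a shell inside the frontend container
docker exec -it frontend sh

# inside it, the compose service name resolves on the shared Docker network
wget -qO- http://strapi:1337/articles

# back.development, by contrast, only means something to nginx / the host's DNS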
docker-compose.yml
version: "3"
services:
# NGINX reverse proxy
nginx:
image: nginx:1.17.10
container_name: nginx_reverse_proxy
restart: unless-stopped
depends_on:
- frontend
- strapi
volumes:
- ./reverse_proxy/nginx.conf:/etc/nginx/nginx.conf
ports:
- 80:80
# NextJS Front end
frontend:
build:
context: ./frontend
dockerfile: Dockerfile
container_name: frontend
restart: unless-stopped
volumes:
- ./frontend:/srv/frontend
- /srv/frontend/node_modules
- /srv/frontend/.next
ports:
- 3000:3000
# Strapi CMS
strapi:
image: strapi/strapi
container_name: strapi
restart: unless-stopped
env_file: .env
environment:
DATABASE_CLIENT: ${DATABASE_CLIENT}
DATABASE_NAME: ${DATABASE_NAME}
DATABASE_HOST: ${DATABASE_HOST}
DATABASE_PORT: ${DATABASE_PORT}
DATABASE_USERNAME: ${DATABASE_USERNAME}
DATABASE_PASSWORD: ${DATABASE_PASSWORD}
volumes:
- ./app:/srv/app
ports:
- 1337:1337
# MongoDB database
db:
image: mongo
container_name: db
restart: unless-stopped
env_file: .env
environment:
MONGO_INITDB_ROOT_USERNAME: ${MONGO_INITDB_ROOT_USERNAME}
MONGO_INITDB_ROOT_PASSWORD: ${MONGO_INITDB_ROOT_PASSWORD}
volumes:
- strapidata:/data/db
ports:
- 27017:27017
networks:
wellington-network:
driver: bridge
volumes:
strapidata:
Nginx config
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name front.development;

        location / {
            proxy_pass http://frontend:3000;
            proxy_pass_request_headers on;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_redirect off;
        }
    }

    server {
        listen 80;
        server_name back.development;

        location / {
            proxy_pass http://strapi:1337;
            proxy_pass_request_headers on;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_redirect off;
        }
    }
}
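Since fetch/axios calls made during server-side rendering run inside the frontend container, not in the browser, one common pattern, sketched here under that assumption, is to target the Strapi service by its compose service name; a 404 at that point usually means the container was reached but the path doesn't match a Strapi route:

// hypothetical server-side data fetch in the Next.js app;
// "strapi" resolves only inside the Docker network, so keep
// browser-side requests pointed at back.development
export async function getServerSideProps() {
  const res = await fetch('http://strapi:1337/articles/some-article-title');
  const article = await res.json();
  return { props: { article } };
}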
I hope one of you can help me.
I have a website running Strapi in Docker as the backend. I use nginx as the server. For now I have used it with the original URL, but I want to run it over HTTPS with an upstream URL like dashboard.website.com.
My problem is that I don't know how to create the server.js file to tell Strapi that it should allow another URL instead of the standard one. There are many guides, but none show how to create it with docker-compose.
Can one of you explain how I can create the server.js file for Strapi and make Strapi aware of it when I run docker-compose?
Here is a copy of my docker-compose.yml file:
version: '3'

services:
  nodejs:
    build:
      context: .
      dockerfile: Dockerfile
    image: nodejs
    container_name: nodejs
    ports:
      - 8081:8081
    restart: unless-stopped
    networks:
      - app-network

  webserver:
    image: nginx:stable-perl
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - web-root:/var/www/html
      - ./server/nginx-conf:/etc/nginx/conf.d
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
    depends_on:
      - nodejs
    networks:
      - app-network

  certbot:
    image: certbot/certbot
    container_name: certbot
    volumes:
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      - web-root:/var/www/html
    depends_on:
      - webserver
    command: certonly --webroot --webroot-path=/var/www/html --email [EMAIL ADDRESS] --agree-tos --no-eff-email --force-renewal -d [DOMAIN]

  strapi:
    container_name: strapi
    image: strapi/strapi:3.6-alpine
    environment:
      - APP_NAME=strapi-app
      - DATABASE_CLIENT=mongo
      - DATABASE_HOST=db
      - DATABASE_PORT=27017
      - DATABASE_NAME=strapi
      - DATABASE_USERNAME=
      - DATABASE_PASSWORD=
      - AUTHENTICATION_DATABASE=strapi
    ports:
      - 1337:1337
    volumes:
      - strapi-app:/srv/app
    depends_on:
      - db
    restart: unless-stopped
    networks:
      - app-network

  db:
    container_name: mongo
    image: mongo:4.4.5-bionic
    environment:
      - MONGO_INITDB_DATABASE=strapi
    volumes:
      - dbdata:/data/db
    restart: unless-stopped
    networks:
      - app-network

networks:
  app-network:
    driver: bridge

volumes:
  dbdata:
  node_modules:
  certbot-etc:
  certbot-var:
  strapi-app:
  web-root:
    driver: local
    driver_opts:
      type: none
      device: /
      o: bind
And here is a copy of my nginx configuration:
server {
    listen 80;
    listen [::]:80;
    access_log off;
    server_name [DOMAIN];

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }

    if ($http_user_agent ~ (LieBaoFast|UCBrowser|MQQBrowser|Mb2345Browser) ) {
        return 403;
    }

    location / { return 301 https://[DOMAIN].org$request_uri; }
}

upstream dashboard {
    server strapi:1337;
}
server {
    listen 443 ssl;
    server_name [DOMAIN];
    access_log off;
    ssl_certificate /etc/letsencrypt/live/[DOMAIN]/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/[DOMAIN]/privkey.pem;

    if ($http_user_agent ~ (LieBaoFast|UCBrowser|MQQBrowser|Mb2345Browser) ) {
        return 403;
    }

    # WEBSITE
    location / {
        proxy_pass http://nodejs:8081;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    # STRAPI - ADMIN
    location /d {
        #rewrite ^/d/?(.*)$ /$1 break;
        proxy_pass http://dashboard;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_pass_request_headers on;
    }
}
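On the server.js question: in Strapi v3 the public URL normally lives in config/server.js. A minimal sketch, assuming the admin should be served under the /d prefix proxied above (the PUBLIC_URL variable name is an assumption, not a Strapi default):

// config/server.js (Strapi v3) - hypothetical sketch
module.exports = ({ env }) => ({
  host: env('HOST', '0.0.0.0'),
  port: env.int('PORT', 1337),
  // public-facing URL Strapi builds links against when it sits behind the proxy
  url: env('PUBLIC_URL', 'https://[DOMAIN]/d'),
});

To make the container pick it up with docker-compose, one option is to bind-mount the file into the app directory, e.g. adding ./server.js:/srv/app/config/server.js under the strapi service's volumes.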
I'm new to Docker and NGINX and I think I may have made some mistakes, but here goes. I built my application and dockerized it normally, and on my own VPS I ran docker-compose up successfully; accessing my server's IP on the port I mapped in nginx works. The problem is when I configure a DNS virtual host. Here is my code to explain better:
docker-compose.yml
version: "3"
services:
db:
image: postgres
restart: always
ports:
- "5432:5432"
networks:
- animes-lobby
environment:
POSTGRES_USER:
POSTGRES_DB:
POSTGRES_PASSWORD:
backend:
build: ./backend
command: >
bash -c "yarn build
&& yarn typeorm migration:run
&& yarn start:prod"
networks:
- animes-lobby
ports:
- "4000:4000"
volumes:
- ./backend:/backend
- ./backend/node_modules
depends_on:
- db
frontend:
build: ./frontend
command: >
bash -c "yarn build
&& yarn start"
networks:
- animes-lobby
ports:
- "3000:3000"
volumes:
- ./frontend:/frontend
depends_on:
- backend
networks:
animes-lobby:
driver: bridge
Here is the container list the moment I run docker-compose up:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9bb32ec92e61 animeslobbycom_frontend "docker-entrypoint.s…" 30 seconds ago Up 28 seconds 0.0.0.0:3000-3001->3000-3001/tcp animeslobbycom_frontend_1
8b0f09b039e4 animeslobbycom_backend "docker-entrypoint.s…" 36 seconds ago Up 29 seconds 0.0.0.0:4000-4001->4000-4001/tcp animeslobbycom_backend_1
ed14f72e9db8 postgres "docker-entrypoint.s…" 36 seconds ago Up 36 seconds 0.0.0.0:5432->5432/tcp animeslobbycom_db_1
As mentioned, I can access it via IP:PORT and it works perfectly, but when I add my virtual host configuration:
server {
    listen 80;
    server_name animeslobby.com www.animeslobby.com;

    location / {
        proxy_http_version 1.1;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localhost:3000;
    }
}
I get a Bad Gateway response on the domain. Has anyone been through this and can help me?
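A few first checks that usually narrow a Bad Gateway like this down (a sketch; it assumes this nginx runs directly on the VPS rather than in a container, since proxy_pass http://localhost:3000 only reaches the mapped port in that case):

# validate the config and reload nginx
sudo nginx -t && sudo systemctl reload nginx

# confirm the frontend really answers on the loopback interface nginx uses
curl -I http://127.0.0.1:3000

# see what nginx itself logs for the failing upstream connection
sudo tail -n 20 /var/log/nginx/error.log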