I couldn't find an answer to my question from other similar questions.
So, I have two docker containers:
Next.JS web-app
nginx reverse proxy
The NextJS container worked as expected without the nginx reverse proxy.
What's more, I can log in to the nginx container with docker exec -it nginx sh and read those static files from the Next.JS container with curl. I can also see the static files in the folder from the shared volume.
I run them with docker-compose:
version: '3.9'

services:
  nginx:
    image: arm64v8/nginx:alpine
    container_name: nginx
    ports:
      - "80:80"
      - "443:443"
    networks:
      - blog
    restart: unless-stopped
    depends_on:
      - website-front
    volumes:
      - type: volume
        source: nextjs-build
        target: /nextjs
        read_only: true
      - type: bind
        source: /etc/ssl/private/blog-ssl
        target: /etc/ssl/private/
        read_only: true
      - type: bind
        source: ./nginx/includes
        target: /etc/nginx/includes
        read_only: true
      - type: bind
        source: ./nginx/conf.d
        target: /etc/nginx/conf.d
        read_only: true
      - type: bind
        source: ./nginx/dhparam.pem
        target: /etc/nginx/dhparam.pem
        read_only: true
      - type: bind
        source: ./nginx/nginx.conf
        target: /etc/nginx/nginx.conf
        read_only: true
  website-front:
    build: ./website
    container_name: website-front
    ports:
      - "3000"
    networks:
      - blog
    restart: unless-stopped
    volumes:
      - nextjs-build:/app/.next

volumes:
  nextjs-build:

networks:
  blog:
    external:
      name: nat
my nginx configs:
upstream nextjs_upstream {
    server website-front:3000;
}

server {
    listen 443 http2 ssl;
    listen [::]:443 http2 ssl;

    server_name website_url;

    ssl_certificate /etc/ssl/private/chain.crt;
    ssl_certificate_key /etc/ssl/private/server.key;
    ssl_trusted_certificate /etc/ssl/private/ca.ca-bundle;

    # access_log /var/log/nginx/host.access.log main;

    # security
    include includes/security.conf;
    include includes/general.conf;

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;

    location /_next {
        proxy_pass http://nextjs_upstream/_next/;
    }

    location / {
        proxy_pass http://nextjs_upstream;
    }
}
Tried multiple nginx configurations for the static route, e.g.:

location /_next {
    root /nextjs;
}
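For reference, `root` and `alias` behave differently here: with `root /nextjs;`, nginx appends the full request URI, so /_next/static/chunk.js is looked up at /nextjs/_next/static/chunk.js, while the shared volume (mounted from /app/.next) holds the files under /nextjs/static/. A hedged sketch of a location that strips the prefix instead (paths assumed from the compose file above, not the poster's final config):

```nginx
# Sketch: `alias` replaces the matched /_next/ prefix, so a request for
# /_next/static/chunk.js is served from /nextjs/static/chunk.js
location /_next/ {
    alias /nextjs/;
}
```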
NextJS dockerfile:
FROM node:alpine AS builder

# this ensures we fix symlinks for npx, Yarn, and pnpm
RUN apk add --no-cache libc6-compat
RUN corepack disable && corepack enable

WORKDIR /app
COPY ./ ./
RUN yarn install --frozen-lockfile
RUN yarn build

ENV NODE_ENV production

# RUN, not CMD: a second CMD overrides the first, so a `CMD chown` here would never execute
RUN chown -R node:node /app/.next

EXPOSE 3000
USER node
CMD [ "yarn", "start" ]
With that config I could see my website, but static files served through the upstream returned 404.
It turned out the problem was a wrong path for the mime.types file in nginx.conf, plus the default location paths in the includes/general.conf file.
After I changed the include from mime.types to /etc/nginx/mime.types, it started working again.
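The fix described above amounts to using an absolute include path: a bare `mime.types` is resolved relative to nginx's configuration prefix and silently fails if the file is not where nginx expects it, so responses fall back to the default MIME type. A minimal sketch:

```nginx
# Sketch of the fix: an absolute path works regardless of nginx's prefix
http {
    include /etc/nginx/mime.types;        # was: include mime.types;
    default_type application/octet-stream;
}
```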
Related
I am trying to use nginx with docker-compose to route traffic for two different apps with different domain names. I want to be able to go to publisher.dev but I can only access that app from localhost:3000 (this is a react app) and I have another app which I want to access from widget.dev but I can only access from localhost:8080 (this is a Preact app). This is my folder structure and configs:
|-docker-compose.yml
|-nginx
|--default.conf
|--Dockerfile.dev
|-publisher
|--// react app
|--Dockerfile.dev
|-widget
|--// preact app (widget)
|--Dockerfile.dev
# default.conf
upstream publisher {
    server localhost:3000;
}

upstream widget {
    server localhost:8080;
}

server {
    listen 80;
    server_name publisher.dev;

    location / {
        proxy_pass http://publisher/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}

server {
    listen 80;
    server_name widget.dev;

    location / {
        proxy_pass http://widget/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
nginx Dockerfile.dev
FROM nginx:stable-alpine
COPY ./default.conf /etc/nginx/conf.d/default.conf
publisher Dockerfile.dev (same as widget Dockerfile.dev)
# Specify the base image
FROM node:16-alpine

# Specify the working directory inside the container
WORKDIR /app

# copy the package.json from your local hard drive to the container
COPY ./package.json ./

# install dependencies
RUN npm install

# copy files from local hard drive into container
# by copying the package.json and running npm install before copying files,
# this ensures that a change to a file does not cause a re-run of npm install
COPY ./ ./

# command to run when the container starts up
CMD ["npm", "run", "start"]

# build this docker image with:
# docker build -f Dockerfile.dev .
# run a container from it with:
# docker run <image id>
docker-compose.yml
version: '3'
services:
  nginx:
    build:
      dockerfile: Dockerfile.dev
      context: ./nginx
    ports:
      - 3050:80
    restart: always
    depends_on:
      - publisher
      - widget
  publisher:
    stdin_open: true
    build:
      dockerfile: Dockerfile.dev
      context: ./publisher
    volumes:
      - /app/node_modules
      - ./publisher:/app
    ports:
      - 3000:3000
    environment:
      VIRTUAL_HOST: publisher.dev
  widget:
    stdin_open: true
    build:
      dockerfile: Dockerfile.dev
      context: ./widget
    volumes:
      - /app/node_modules
      - ./widget:/app
    ports:
      - 8080:8080
    environment:
      VIRTUAL_HOST: widget.dev
hosts file
127.0.0.1 publisher.dev
127.0.0.1 widget.dev
Why is your upstream trying to connect to publisher and widget? Shouldn't they connect to localhost:3000 and localhost:8080? Let the upstream server names be publisher and widget, but point them at localhost:

upstream publisher {
    # server publisher:3000;
    server localhost:3000;
}
I have a vue app running on the front-end with spring boot backend both on different containers.
I want to dockerize my vuejs app to pass environment variables from the docker-compose file to nginx.
My problem is that my nginx conf file is not picking up environment variables from docker-compose.
Docker Compose File
backend-service:
  container_name: backend-service
  image: backend-service-local
  networks:
    - app-network
  ports:
    - 8081:8080
  restart: on-failure
  depends_on:
    postgresdb:
      condition: service_healthy

vue-app:
  container_name: vue-app
  image: vue-app-local
  networks:
    - app-network
  ports:
    - 8080:80
  environment:
    VUE_APP_BASE_URL: http://backend-service:8080
  restart: on-failure
  depends_on:
    backend-service:
      condition: service_started
Dockerfile
# build stage
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# production stage
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
NGINX CONF
# Run as a less privileged user for security reasons.
user nginx;

# #worker_threads to run;
# "auto" sets it to the #CPU_cores available in the system, and
# offers the best performance.
worker_processes auto;

events { worker_connections 1024; }

http {
    sendfile on;

    upstream docker-backend {
        server ${VUE_APP_BASE_URL};
    }

    server {
        # Hide nginx version information.
        server_tokens off;

        listen 80;
        root /usr/share/nginx/html;
        include /etc/nginx/mime.types;

        location / {
            try_files $uri $uri/ /index.html;
        }

        location /api/ {
            proxy_pass http://docker-backend;
        }
    }
}
Please see the nginx docker image docs, in the "Using environment variables in nginx configuration" section of the page.
The nginx docker image deals with environment variables by injecting them at runtime, using the mechanism described on that page.
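As a sketch of the mechanism those docs describe (the file names here are assumptions, not from the question): the official nginx image's entrypoint runs envsubst over any `*.template` files under /etc/nginx/templates and writes the results into /etc/nginx/conf.d at startup. Note also that an upstream `server` directive takes host:port, not a URL, so the variable should be e.g. backend-service:8080 rather than http://backend-service:8080.

```yaml
# docker-compose fragment (sketch): mount a template instead of a finished conf
vue-app:
  image: vue-app-local
  volumes:
    - ./default.conf.template:/etc/nginx/templates/default.conf.template:ro
  environment:
    VUE_APP_BASE_URL: backend-service:8080  # host:port, no http:// scheme
```

where default.conf.template contains the `upstream` block with `server ${VUE_APP_BASE_URL};`. This only works if the image's stock entrypoint runs, so the custom `COPY nginx.conf` approach would need to move the variable parts into conf.d.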
I am using active storage in rails 6 with Docker and Nginx. I am uploading an image through the rails console, i.e.,
object.images.attach(io: File.open("#{Rails.root}/image_path"), filename: "image.jpg")
It uploads successfully, but the file is not stored in the specified location, i.e., Rails.root.join("storage"), so I cannot retrieve the image files afterwards.
I am not sure if the issue is with Docker, Nginx, or Active Storage.
Please help...
app.Dockerfile
FROM ruby:3.0.0
RUN apt-get update \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY . /app
COPY ./entrypoint.sh /app
RUN chmod +x ./entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
EXPOSE 3000
CMD ["bundle", "exec", "puma", "-C", "config/puma.rb", "-p", "3000"]
nginx.Dockerfile
FROM nginx
RUN apt-get update -qq && apt-get -y install apache2-utils
ENV RAILS_ROOT /app
WORKDIR $RAILS_ROOT
RUN mkdir log
COPY public public/
COPY ./nginx.conf /tmp/docker.nginx
RUN envsubst '$RAILS_ROOT' < /tmp/docker.nginx > /etc/nginx/conf.d/default.conf
EXPOSE 3000
CMD [ "nginx", "-g", "daemon off;" ]
nginx.conf
upstream app {
    server 'app:3000';
}

server {
    listen 3000;
    server_name localhost;

    keepalive_timeout 5;

    root /app/public;

    access_log /app/log/nginx.access.log;
    error_log /app/log/nginx.error.log info;

    if (-f $document_root/maintenance.html) {
        rewrite ^(.*)$ /maintenance.html last;
        break;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;

        if (-f $request_filename) {
            break;
        }

        if (-f $request_filename/index.html) {
            rewrite (.*) $1/index.html break;
        }

        if (-f $request_filename.html) {
            rewrite (.*) $1.html break;
        }

        if (!-f $request_filename) {
            proxy_pass http://app;
            break;
        }
    }

    location ~ ^(?!/rails/).+\.(jpg|jpeg|gif|png|ico|json|txt|xml)$ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;

        try_files $uri =404;
        error_page 404 /404.html;
    }

    location = /500.html {
        root /app/current/public;
    }
}
docker-compose.yml
version: '3.9'
services:
  db:
    image: mariadb:10.3.29
    restart: unless-stopped
    volumes:
      - .:/my_app
      - db-volume:/var/lib/mysql
    ports:
      - '3306:3306'
    environment:
      MARIADB_DATABASE_NAME: database
      MARIADB_ROOT_USERNAME: username
      MARIADB_ROOT_PASSWORD: password
    networks:
      - network_name
  app:
    build:
      context: .
      dockerfile: app.Dockerfile
    container_name: container_name
    command: bash -c "bundle exec puma -C config/puma.rb -p 3000"
    restart: unless-stopped
    volumes:
      - .:/my_app
      - bundle-volume:/usr/local/bundle
    depends_on:
      - db
    networks:
      - network_name
  nginx:
    build:
      context: .
      dockerfile: nginx.Dockerfile
    restart: unless-stopped
    depends_on:
      - app
    ports:
      - 3000:3000
    networks:
      - network_name

volumes:
  db-volume:
  bundle-volume:

networks:
  network_name:
config/storage.yml
test:
  service: Disk
  root: <%= Rails.root.join("tmp/storage") %>

local:
  service: Disk
  root: <%= Rails.root.join("storage") %>
I believe there is a naming issue in your docker-compose.yml:

services:
  db:
    image: mariadb:10.3.29
    restart: unless-stopped
    volumes:
      - .:/my_app
      - db-volume:/var/lib/mysql

In the volumes, you mount the project at /my_app, but your app.Dockerfile sets WORKDIR /app, so the files end up in a directory your application service never uses.
So .:/my_app should be changed to your application's actual path, i.e. .:/app. Modify all the volumes accordingly and it should work.
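A hedged sketch of the corrected mount (only the app service shown; the path comes from the question's app.Dockerfile, whose WORKDIR is /app):

```yaml
# Mount the source at /app so Active Storage writes land in the
# directory the Rails process actually uses
app:
  build:
    context: .
    dockerfile: app.Dockerfile
  volumes:
    - .:/app              # was: .:/my_app
    - bundle-volume:/usr/local/bundle
```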
I am trying to find a way to publish nginx, express, and Let's Encrypt SSL all together using docker-compose. There are many documents about this, so I referenced them and tried to make my own configuration. I succeeded in configuring nginx + SSL from https://medium.com/@pentacent/nginx-and-lets-encrypt-with-docker-in-less-than-5-minutes-b4b8a60d3a71
So now I want to put a sample nodejs express app into the nginx + SSL docker-compose. But I don't know why I get 502 Bad Gateway from nginx rather than express's initial page.
I am testing this app with a spare domain of mine, on an AWS EC2 Ubuntu 16 instance. I think there is no problem with the domain DNS or security-rule settings: ports 80, 443, and 3000 are all open already, and when I tested without the express app, the nginx default page showed fine.
nginx conf in /etc/nginx/conf.d
server {
    listen 80;
    server_name example.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name example.com;
    server_tokens off;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    ssl_certificate /etc/letsencrypt/live/sendpi.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/sendpi.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
docker-compose.yml
version: '3'
services:
  app:
    container_name: express
    restart: always
    build: .
    ports:
      - '3000:3000'
  nginx:
    container_name: nginx
    image: nginx:1.15-alpine
    restart: unless-stopped
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    ports:
      - "80:80"
      - "443:443"
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
Dockerfile of express
FROM node:12.2-slim
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
I think SSL works fine, but there are some problems between express app and nginx. How can I fix this?
proxy_pass http://localhost:3000;

is proxying the request to port 3000 on the container that is running nginx. What you instead want is to connect to port 3000 of the container running express. For that, we need to do two things.
First, we make the express container visible to the nginx container at a predefined hostname. We can use links in docker-compose:

nginx:
  links:
    - "app:expressapp"
Alternatively, since links are now considered a legacy feature, a better way is to use a user defined network. Define a network of your own with
docker network create my-network
and then connect your containers to that network in the compose file by adding the following at the top level:

networks:
  default:
    external:
      name: my-network
All the services connected to a user defined network can access each other via name without explicitly setting up links.
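A quick way to confirm the name resolution works, assuming the stack is up on that network (these commands need a running Docker daemon, and the tooling available inside the container varies by image):

```shell
docker network create my-network
docker-compose up -d
# From inside the nginx container, the express service resolves by name:
docker-compose exec nginx wget -qO- http://app:3000
```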
Then in the nginx.conf, we proxy to the express container using that hostname:

location / {
    proxy_pass http://app:3000;
}
Warning: The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link.
Define networks in your docker-compose.yml and configure your services with the appropriate network:
version: '3'
services:
  app:
    restart: always
    build: .
    networks:
      - backend
    expose:
      - "3000"
  nginx:
    image: nginx:1.15-alpine
    restart: unless-stopped
    depends_on:
      - app
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    networks:
      - frontend
      - backend
    ports:
      - "80:80"
      - "443:443"
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"

networks:
  frontend:
  backend:
Note: the app service no longer publishes its ports to the host; it only exposes port 3000 (ref. exposing and publishing ports), so it is only available to services connected to the backend network. The nginx service has a foot in both the backend and frontend networks to accept incoming traffic from the frontend and proxy the connections to the app in the backend (ref. multi-host networking).
With user-defined networks you can resolve the service name:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    upstream app {
        server app:3000 max_fails=3;
    }

    server {
        listen 80;
        server_name example.com;
        return 301 https://$server_name$request_uri;
    }

    server {
        listen 443 ssl;
        listen [::]:443 ssl;

        server_name example.com;
        server_tokens off;

        location / {
            proxy_pass http://app;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }

        ssl_certificate /etc/letsencrypt/live/sendpi.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/sendpi.com/privkey.pem;
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    }
}
Removing the container_name from your services makes it possible to scale them: docker-compose up -d --scale nginx=1 --scale app=3 - nginx will load-balance the traffic round-robin across the 3 app containers.
I think a source of confusion here may be the way the "localhost" designation behaves among services running under docker-compose. The way docker-compose orchestrates your containers, each container understands itself to be "localhost", so "localhost" does not refer to the host machine (and, if I'm not mistaken, there is no way for a container to reach a service published on a host port, apart from maybe some security exploits). To demonstrate:
services:
  app:
    container_name: express
    restart: always
    build: .
    ports:
      - '2999:3000' # expose app's port on host's 2999
Rebuild
docker-compose build
docker-compose up
Tell container running the express app to curl against its own running service on port 3000:
$ docker-compose exec app /bin/bash -c "curl http://localhost:3000"
<!DOCTYPE html>
<html>
<head>
<title>Express</title>
<link rel='stylesheet' href='/stylesheets/style.css' />
</head>
<body>
<h1>Express</h1>
<p>Welcome to Express</p>
</body>
</html>
Tell app to try to reach that same service, which we exposed on port 2999 on the host machine:
$ docker-compose exec app /bin/bash -c "curl http://localhost:2999"
curl: (7) Failed to connect to localhost port 2999: Connection refused
We will of course see this same behavior between running containers as well, so in your setup nginx was trying to proxy to its own service running on localhost:3000 (but there wasn't one, as you know).
Tasks

build a NodeJS app
add SSL functionality out of the box (that can work automatically)

Solution

https://github.com/evertramos/docker-compose-letsencrypt-nginx-proxy-companion

{path_to_the_project}/docker-compose.yml
version: '3.7'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy:alpine
    restart: always
    container_name: nginx-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs:ro
      - ./vhost.d:/etc/nginx/vhost.d
      - ./html:/usr/share/nginx/html
      - ./conf.d:/etc/nginx/conf.d
    ports:
      - "443:443"
      - "80:80"
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: letsencrypt
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./certs:/etc/nginx/certs:rw
      - ./vhost.d:/etc/nginx/vhost.d:rw
      - ./html:/usr/share/nginx/html:rw
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
  api:
    container_name: ${APP_NAME}
    build:
      context: .
      dockerfile: Dockerfile
    command: npm start --port ${APP_PORT}
    expose:
      - ${APP_PORT}
    # ports:
    #   - ${APP_PORT}:${APP_PORT}
    restart: always
    environment:
      VIRTUAL_PORT: ${APP_PORT}
      VIRTUAL_HOST: ${DOMAIN}
      LETSENCRYPT_HOST: ${DOMAIN}
      LETSENCRYPT_EMAIL: ${LETSENCRYPT_EMAIL}
      NODE_ENV: production
      PORT: ${APP_PORT}
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs:ro
{path_to_the_project}/.env

APP_NAME=best_api
APP_PORT=3000
DOMAIN=api.site.com
LETSENCRYPT_EMAIL=myemail@gmail.com

Do not forget to point DOMAIN at your server before you run the containers there.
How does it work?
Just run docker-compose up --build -d
I have a personal website that leverages Flask, NginX, Gunicorn, and MySQL. It runs perfectly well, however, I am porting it over to a set of Docker containers (mostly to learn Docker).
I am currently facing an issue that I believe stems from my NginX configuration, as I am attempting to forward traffic for my_site to my_site:8000 [code block 1 below].
My problem is this: I finally got the forwarding to work, in that when I go to my_site.com, it renders my HTML (presumably through forwarding to Gunicorn's exposed port 8000). But, it does not format it using the bootstrap4 formatting. My terminal does show a 200 response for finding my main.css file, however. The strange part is that when I go to my_site:8000, it does properly format my pages!
Do any of you have an idea as to what could be my mix up? I've double checked my port exposures, my docker services references, etc., but cannot figure out why specifying port 8000 still makes a difference, after what I believe was a successful implementation of the proxy_pass to port 8000 in my NginX configuration.
My docker-compose.yml file contents are shown below in code block 2
The NginX container is from the official NginX image on Docker hub
The MySQL container is from the official MariaDB image on Docker hub
The other containers are built upon Ubuntu 18.10 images. I simply downloaded Python, nano, requirements.txt, etc. on these.
Block 1 -- conf.d file for NginX
events { }

http {
    upstream upstream-web {
        server jonathanolson.us;
    }

    server {
        listen 80;
        server_name gunicornservice;

        location /static {
            alias /NHL-Project/flasksite/static;
        }

        location / {
            proxy_pass http://gunicornservice:8000;
            # include /etc/nginx/proxy_params;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
Block 2 -- docker-compose.yml
version: '3.7'

networks:
  default:
    external:
      name: nhlflasknetwork

services:
  db:
    restart: always
    image: jonathanguy/mymariadb
    ports:
      - "3306:3306"
    volumes:
      - type: bind
        source: /home/jonathan/NHL-Project
        target: /NHL-Project
      - type: volume
        source: mynhldb
        target: /var/lib/mysql
      - type: volume
        source: myConfig
        target: /etc/mySecrets # Here, we will have the file /etc/mySecrets/config.py
    environment:
      - MYSQL_USER_FILE=/etc/mySecrets/mysql_user_file
      - MYSQL_PASSWORD_FILE=/etc/mySecrets/mysql_user_password_file
  web:
    restart: always
    image: jonathanguy/myflask
    ports:
      - "5000:5000"
    volumes:
      - type: bind
        source: /home/jonathan/NHL-Project
        target: /NHL-Project
      - type: volume
        source: myConfig
        target: /etc/mySecrets # Here, we will have the file /etc/mySecrets/config.py
    environment:
      - MYSQL_ROOT_PASSWORD_FILE=/etc/mySecrets/mysql_root_password_file
      - MYSQL_USER_FILE=/etc/mySecrets/mysql_user_file
      - MYSQL_PASSWORD_FILE=/etc/mySecrets/mysql_user_password_file
    depends_on:
      - db # Tells docker that "web" can start once "db" is started and running
    command: bash -c "python3 NHL-Project/flaskrun.py"
  server:
    build: ./myNginx
    depends_on:
      - web
    volumes:
      - type: bind # TODO -- Make this a volume mount for production
        source: /home/jonathan/NHL-Project/flasksite/templates
        target: /usr/share/nginx/html
      - type: bind # TODO -- Make this a volume mount for production
        source: /home/jonathan/NHL-Project/flasksite/static/favicon.ico
        target: /usr/share/nginx/html/favicon.ico
      - type: bind
        source: /home/jonathan/NHL-Project/conf/conf.d
        target: /etc/nginx/nginx.conf
    ports:
      - "80:80"
    environment:
      - NGINX_PORT=80
    command: /bin/bash -c "chown -R nginx /usr/share/nginx/html && exec nginx -g 'daemon off;'"
  gunicornservice:
    image: jonathanguy/mygunicorn
    depends_on:
      - server
    volumes:
      - type: bind
        source: /home/jonathan/NHL-Project
        target: /NHL-Project
      - type: volume
        source: myConfig
        target: /etc/mySecrets
    ports:
      - "8000:8000"
    command: /bin/bash -c "gunicorn -w 5 flaskrun:app -b 0.0.0.0:8000"
    working_dir: /NHL-Project

volumes:
  mynhldb:
    external: true
  myConfig:
    external: true
  myCode:
    external: true
I expect my full site to be rendered and formatted correctly when visiting my_site.com.
Given my (successful?) implementation of a proxy_pass in the NginX config, I am getting all the HTML, and the main.css file is found successfully.
But I still have to visit my_site.com:8000 to have the HTML formatted using the specified bootstrap formatting.
Judging from this snippet,

location /static {
    alias /NHL-Project/flasksite/static;
}

your css files are placed here.
This block is what I guess is creating the issue: the nginx service will try to find the files in its own container's filesystem, but they should be served from your web container.
Try removing that block from the nginx conf file.
Also, if possible, avoid binding volumes of the code where it is not required, like for nginx and MySQL.
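A hedged sketch of the resulting conf.d file with the /static block removed, so stylesheet requests also go through the proxy (directives copied from the question's block 1):

```nginx
events { }

http {
    server {
        listen 80;

        location / {
            proxy_pass http://gunicornservice:8000;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
```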