Nginx is not working with docker-compose for a React app

I have created an nginx config, a Dockerfile, and a docker-compose file for it.
nginx/nginx.conf
server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    error_page 500 502 503 504 /50x.html;

    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Dockerfile
FROM node:13.12.0-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
RUN npm ci --silent
RUN npm install react-scripts@3.4.1 -g --silent
COPY . ./
RUN npm run build
# production environment
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
docker-compose.yml
version: "3.7"
services:
client:
container_name: client
build:
context: .
dockerfile: Dockerfile
ports:
- 3001:3000
Now, after running docker-compose up --build, I get the following logs:
client | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
client | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
client | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
client | 10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
client | 10-listen-on-ipv6-by-default.sh: error: /etc/nginx/conf.d/default.conf differs from the packages version
client | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
client | /docker-entrypoint.sh: Configuration complete; ready for start up
I am not sure if the problem is the "differs from the packages version" message or something else, but when I try to visit the URL, it says the site can't be reached.

You didn't specify which URL you are using, but your nginx listens on port 80 inside the container. You need to publish that port in docker-compose. Under ports:, include:
- 80:80
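For reference, a sketch of the corrected service, assuming you still want to reach the app on host port 3001 (the host side can be anything, but the container side must be 80, where nginx listens):
services:
  client:
    container_name: client
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3001:80"   # host port 3001 -> container port 80 (nginx)
After rebuilding with docker-compose up --build, the app should be reachable at http://localhost:3001 (or http://localhost if you publish 80:80 instead).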

Related

Upload file to dockerized app on the fly, then serve

I've got a Python web app that runs in two Docker containers, one for the FastAPI backend and the other for the Vue.js front-end (plus a third one for the Postgres DB). My task is to upload a file from the front-end client to the server, store it permanently, and serve it, so that I can use static URLs in my img tags.
A possibly similar question is here: How to upload file outside Docker container in Flask app. But it concerns an approach where the server must be restarted after the upload to reflect the changes. I need to do everything on the fly, while the app is running.
Dockerfile for front-end:
FROM node:16.14.2 as builder
WORKDIR /admin
COPY package*.json ./
COPY vite.config.js ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx:1.21
COPY ./nginx/nginx.conf /etc/nginx/nginx.conf
RUN rm -rf /usr/share/nginx/html/*
COPY --from=builder /admin/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Dockerfile for backend:
FROM tiangolo/uvicorn-gunicorn:python3.9
EXPOSE 8000
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
COPY ./requirements.txt /tmp/requirements.txt
RUN pip install --no-cache-dir -r /tmp/requirements.txt
WORKDIR /app
COPY . /app
I can certainly transfer a file (as a raw byte stream) from the front-end to the backend via Axios and receive it on the Python side. I can then process it however I like. But how can I store it in a location the front-end container can read from and serve statically?
UPDATE (Use Docker Volumes)
As suggested in this question, I've tried to use a shared volume for my purpose. My COMPOSE file now looks like this:
version: "3.8"
services:
use_frontend:
container_name: 'use_frontend'
# --> ADDED <--
volumes:
- 'myshare:/etc/nginx'
- 'myshare:/usr/share/nginx/html'
build:
context: ./admin
dockerfile: Dockerfile
restart: always
depends_on:
- use_backend
ports:
- 8090:80
use_db:
container_name: use_db
image: postgres:14.2
# etc etc...
use_backend:
container_name: 'use_backend'
volumes:
# --> ADDED <--
- 'myshare:/usr/share/nginx/html'
build:
context: ./api
dockerfile: Dockerfile
restart: always
depends_on:
- use_db
# etc etc...
# --> ADDED <--
volumes:
myshare:
driver: local
The Dockerfile for use_frontend hasn't changed (must it?)
FROM node:16.14.2 as builder
WORKDIR /admin
# copy out files for npm
COPY package*.json ./
COPY vite.config.js ./
# install and build Vue.js
RUN npm install
COPY . .
RUN npm run build
# Nginx image
FROM nginx:1.21
# copy Nginx conf file to VOLUME mounted folder
COPY ./nginx/nginx.conf /etc/nginx/nginx.conf
# clean everything in VOLUME mounted folder (app entry point)
RUN rm -rf /usr/share/nginx/html/*
# copy compiled app to to VOLUME mounted folder (app entry point)
COPY --from=builder /admin/dist /usr/share/nginx/html
# expose port 80 for HTTP access
EXPOSE 80
# run Nginx
CMD ["nginx", "-g", "daemon off;"]
The Nginx conf file hasn't changed either:
events {}

http {
    server {
        listen 80;

        # app entry point
        root /usr/share/nginx/html;

        # MIME types
        include /etc/nginx/mime.types;

        client_max_body_size 20M;

        location / {
            try_files $uri /index.html;
        }

        # etc etc...
    }
}
But after doing
docker compose build
docker compose up
I'm getting file not found errors from Nginx:
use_frontend | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
use_frontend | /docker-entrypoint.sh: Configuration complete; ready for start up
use_frontend | 2022/07/24 23:21:32 [emerg] 1#1: open() "/etc/nginx/nginx.conf" failed (2: No such file or directory)
use_frontend | nginx: [emerg] open() "/etc/nginx/nginx.conf" failed (2: No such file or directory)
What exactly am I doing wrong with the Docker volume mounting?
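For what it's worth, a likely culprit is that mounting the named volume myshare over /etc/nginx masks the nginx.conf that the image copied there: once the volume exists, its contents are what the container sees at that path, not the files baked into the image. A minimal sketch that shares only an uploads directory instead (the volume name uploads and the paths below are hypothetical, adjust them to your layout):
services:
  use_frontend:
    volumes:
      - 'uploads:/usr/share/nginx/html/uploads'   # nginx serves these files statically
    # build, ports, depends_on unchanged...
  use_backend:
    volumes:
      - 'uploads:/data/uploads'                   # the backend writes uploaded files here
    # build, depends_on unchanged...

volumes:
  uploads:
    driver: local
With this layout /etc/nginx stays untouched, and anything the backend writes into its mounted directory is served by nginx under /uploads/<filename>.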

Showing NginX welcome page after Dockerizing

I have Dockerized my Next.js app. It builds fine, but after building it keeps showing the nginx welcome page instead of the actual home page of my application. I have tried many things and it isn't fixed; it's the same on the server too. What is the problem in my setup?
Here I am attaching my files.
This is my Dockerfile
FROM node:14.16.0 as builder
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY ./package.json /app/
RUN npm install
COPY . /app
RUN npm run build
FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/conf.d
EXPOSE 3000
CMD ["nginx", "-g", "daemon off;"]
This is my Docker-compose.yml
version: "3.8"
services:
app:
container_name: kup-frontend
image: kup-frontend
build:
context: .
dockerfile: ./Dockerfile
ports:
- 3000:3000
volumes:
- .:/app
- /app/node_modules
- /app/build
restart: unless-stopped
environment:
- PORT=3000
This is my nginx.conf file
server {
    listen 3000;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    error_page 500 502 503 504 /50x.html;

    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

Unable to load Vue app's js and css files with Nginx

I'm trying to deploy a Vue.js app using docker-compose and Nginx. I'm a beginner with Nginx, and after searching through Stack Overflow and blogs, the website still returns a blank page (it serves Vue's index.html, but is unable to load any JS or CSS files).
Here is my config:
# docker-compose.yml
version: "3.8"
services:
  web:
    build: ./web
    ports:
      - 8080:8080
    volumes:
      - ./dist:/app
    depends_on:
      - nginx
  nginx:
    build: ./nginx
    ports:
      - 80:80
    volumes:
      - ./dist:/app
    restart: always
# ./web/Dockerfile
FROM node:lts-alpine as build-stage
RUN npm install -g http-server
WORKDIR /app
COPY package*.json ./
RUN npm cache clean --force && npm install
COPY . .
RUN npm run build
EXPOSE 8080
CMD [ "http-server", "dist" ]
# ./nginx/Dockerfile
FROM nginx:1.21-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
# ./nginx/nginx.conf
upstream client {
    server web:8080;
}

server {
    listen 80;
    include /etc/nginx/mime.types;

    location / {
        proxy_pass http://client;
        root /app;
        index index.html;
        try_files $uri $uri/ /index.html;
        include /etc/nginx/mime.types;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location ~ \.css {
        add_header Content-Type text/css;
    }

    location ~ \.js {
        add_header Content-Type application/x-javascript;
    }
}
The browser console shows errors when I open the website, and the nginx log shows the following error:
[error] 22#22: *12 open() "/etc/nginx/html/css/app.68c1e752.css" failed (2: No such file or directory),
What should I do to fix this issue?
What you'd typically do is have a single, multi-stage frontend image, as in the Dockerize Vue.js App Real-World Example, that builds your app and copies the dist contents into NGINX's webroot. You don't need separate NGINX and http-server images, because NGINX is already a server (and a much better one).
# docker-compose.yml
version: "3.8"
services:
  nginx:
    build: ./nginx
    ports:
      - 80:80
    restart: always
# ./nginx/Dockerfile
# build stage
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# production stage
FROM nginx:stable-alpine as production-stage
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=build-stage /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Your NGINX config really only needs the try_files declaration for HTML5 History Mode.
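For reference, a minimal default.conf along those lines, as a sketch assuming the webroot used in the Dockerfile above (/usr/share/nginx/html):
server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        index index.html;
        # HTML5 History Mode: fall back to index.html so the Vue router can handle the path
        try_files $uri $uri/ /index.html;
    }
}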

My react docker webpage does not bind to the host

I have a React app which I can run perfectly without Docker on the VM by running "npm run start". However, when I build it into a Docker image and run the container, it doesn't come up. My Dockerfile is as follows:
FROM node:12
USER root
RUN mkdir -p /var/tmp/thermo && chown -R root:root /var/tmp/thermo
WORKDIR /var/tmp/thermo
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "run","start"]
I can successfully create the Docker image and then run it:
docker run -d -p 3000:3000 --name thermo-*** thermo-***
However, the container always exits. The container logs are as follows:
[root#*****]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
03701ed96bca thermo-api "docker-entrypoint.s…" 17 minutes ago Exited (0) 17 minutes ago thermo-api-app
[root#t****]#
[root#****]# docker logs 03701ed96bca
> material-kit-pro-react#1.9.0 start /var/tmp/thermo
> export PORT=3000 && react-scripts start
ℹ 「wds」: Project is running at http://172.17.0.2/
ℹ 「wds」: webpack output is served from
ℹ 「wds」: Content not from webpack is served from /var/tmp/thermo/public
ℹ 「wds」: 404s will fallback to /
Starting the development server..
Then when I curl my website ("curl localhost:3000"), nothing comes up.
I am not sure what I am doing wrong. Any help would be appreciated!
Take another look at your code; the container exits immediately after you start it.
By the way, npm run start is only meant for the development environment, so why not build the code and serve it with Nginx?
My suggestion is to build your code and use a Dockerfile like the one below; Nginx will serve your app.
Dockerfile
# build environment
FROM node:lts-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json /app/package.json
RUN npm install
COPY . /app
RUN npm run build
# production environment
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
You can use this nginx.conf
server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    location /web {
        alias /usr/share/nginx/html;
        index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;

    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

Dockerize Next.js and nginx in one Dockerfile

I've successfully dockerized my app using two Docker images, one for nginx and a second for the app, and it runs well because I use docker-compose.
Now I want to have just one Dockerfile that contains both the app and nginx, and then run it on my local computer. How could I achieve that?
This is my nginx/default.conf
# Cache zone
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=STATIC:10m inactive=7d use_temp_path=off;

upstream nextjs {
    server nextjs:3000;
}

server {
    listen 80 default_server;
    server_name _;
    server_tokens off;

    gzip on;
    gzip_proxied any;
    gzip_comp_level 4;
    gzip_types text/css application/javascript image/svg+xml;

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;

    # BUILT ASSETS (E.G. JS BUNDLES)
    # Browser cache - max cache headers from Next.js as build id in url
    # Server cache - valid forever (cleared after cache "inactive" period)
    location /_next/static {
        proxy_cache STATIC;
        proxy_pass http://nextjs;
    }

    # STATIC ASSETS (E.G. IMAGES)
    # Browser cache - "no-cache" headers from Next.js as no build id in url
    # Server cache - refresh regularly in case of changes
    location /static {
        proxy_cache STATIC;
        proxy_ignore_headers Cache-Control;
        proxy_cache_valid 60m;
        proxy_pass http://nextjs;
    }

    # DYNAMIC ASSETS - NO CACHE
    location / {
        proxy_pass http://nextjs;
    }
}
My /nginx/Dockerfile
FROM nginx:alpine as build
RUN rm /etc/nginx/conf.d/*
COPY ./default.conf /etc/nginx/conf.d/
EXPOSE 80
CMD [ "nginx", "-g", "daemon off;" ]
/Dockerfile [old]
FROM node:alpine
WORKDIR /usr/app
RUN npm install --global pm2
COPY ./package*.json ./
RUN npm install --production
COPY ./ ./
RUN npm run build
EXPOSE 3000
USER node
CMD [ "pm2-runtime", "start", "npm", "--", "start" ]
and this is the new Dockerfile
FROM node:alpine
WORKDIR /usr/app
RUN npm install --global pm2
COPY ./package*.json ./
RUN npm install --production
COPY ./ ./
# Build app
RUN npm run build
EXPOSE 3000
USER node
CMD [ "pm2-runtime", "start", "npm", "--", "start" ]
FROM nginx:stable-alpine
COPY --from=build /usr/app/.next /usr/share/nginx/html
RUN mkdir /usr/share/nginx/log
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/default.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
I can build it, but whenever I run the image it throws this error:
host not found in upstream "nextjs:3000" in /etc/nginx/conf.d/default.conf:5
Thanks to @octagon_octopus, I finally solved this problem by changing my nginx/default.conf.
My Dockerfile:
# build react app, it should be /build
# FROM node:12.2.0-alpine as build
FROM node:13-alpine as build
WORKDIR /app
COPY package.json /app/package.json
RUN npm install --only=prod
COPY . /app
RUN npm run build
# Creating nginx image and copy build folder from above
# FROM nginx:1.16.0-alpine
FROM nginx:stable-alpine
RUN mkdir /usr/share/nginx/buffer
COPY --from=build /app/.next /usr/share/nginx/buffer
COPY --from=build /app/deploy.sh /usr/share/nginx/buffer
RUN chmod +x /usr/share/nginx/buffer/deploy.sh
RUN cd /usr/share/nginx/buffer && ./deploy.sh
RUN mkdir /usr/share/nginx/log
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
and nginx/default.conf
server {
    listen 80;

    location / {
        root /usr/share/nginx/html/pages;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    error_page 500 502 503 504 /50x.html;

    location = /50x.html {
        root /usr/share/nginx/html/pages;
    }

    error_log /usr/share/nginx/log/error.log warn;
    access_log /usr/share/nginx/log/access.log;
}
The reason you get host not found in upstream "nextjs:3000" in /etc/nginx/conf.d/default.conf:5 is that, according to your nginx/default.conf, nginx forwards all received requests to http://nextjs. This worked before because you probably had Node running in a separate container called nextjs. Now you are trying to run nginx and Node in the same container, so the nextjs container no longer exists and nginx has nothing to forward requests to.
It seems to me that you are trying to run a reverse proxy and a Node application within the same container, when running them in two separate containers, as you had before, is the more desirable setup.
If you are just developing your node app locally, you won't need the nginx reverse proxy and you can just send requests to the node app directly, so only the node container is needed. When you deploy to production, you typically use something like an nginx reverse proxy for various reasons like SSL termination and load balancing. In that case you can deploy the nginx and node containers together.
If you really want to continue with your current approach, you will probably have to forward the requests to http://localhost instead of http://nextjs, although I don't think that will be the only problem. Node is probably not running inside your container either: you start the Node application with CMD [ "pm2-runtime", "start", "npm", "--", "start" ] in an earlier stage of a multi-stage Docker build, and that stage's image is discarded in favour of the final nginx stage. You would have to start your Node application inside the nginx container instead.
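If it helps, here is a minimal sketch of the two-container setup described above (the ./app and ./nginx build contexts are assumptions, not the original layout); with Compose, the service name nextjs becomes a resolvable hostname on the shared network, which is exactly what the upstream nextjs { server nextjs:3000; } block expects:
version: "3.8"
services:
  nextjs:
    build: ./app           # the Node/Next.js Dockerfile (pm2-runtime start)
    expose:
      - "3000"             # reachable as nextjs:3000 from other services
  nginx:
    build: ./nginx         # the nginx Dockerfile with default.conf
    ports:
      - "80:80"
    depends_on:
      - nextjs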
