Dockerfile
Docker run command: docker run -itd -p 8080:80 prod
FROM node:16-alpine as builder
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
COPY . /app/
RUN npm install --silent
RUN npm install react-scripts#4.0.3 -g --silent
RUN npm run build
# production environment
FROM nginx:1.21.1-alpine
COPY --from=builder /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
default.conf file
server {
listen 443 ssl default_server;
listen [::]:443 ssl default_server;
ssl_certificate /etc/nginx/sites-available/cert.crt;
ssl_certificate_key /etc/nginx/sites-available/ssl.key;
server_name ipaddress;
location / {
proxy_pass http://localhost:8080;
try_files $uri /index.html;
}
}
I am unable to see my index.html file when I access my IP address over HTTPS; it works fine with http://ipaddress:8080. Above are the Dockerfile and the default.conf file. Nothing shows up in the server logs.
I want to know whether the above configuration is correct, and if not, how to deploy a React app using Docker, SSL and Nginx.
Looking at your comments, it seems your port configuration is not correct: in Nginx you are listening on port 443, but your docker run command only publishes container port 80. Assuming the Node server behind the proxy listens on port 8080, the docker run command should look like this:
$ docker run -itd -p 443:443 prod
Then try https://ipaddress. Depending on your certificate settings you will either see a browser warning (if the certificate is not fully trusted, you may need to add it as an exception) or the proper contents.
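If the goal is simply to serve the React build over HTTPS from this container, a minimal default.conf sketch could look like the one below. This is a hedged example with assumptions: it serves the static files that the Dockerfile copies to /usr/share/nginx/html instead of proxying to localhost:8080, and it assumes the certificate and key have been copied into the image at the paths shown (for example with additional COPY lines in the Dockerfile).
server {
    # serve the built React app directly over HTTPS
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    ssl_certificate     /etc/nginx/certs/cert.crt;   # assumed path inside the image
    ssl_certificate_key /etc/nginx/certs/ssl.key;    # assumed path inside the image
    server_name _;
    root /usr/share/nginx/html;
    location / {
        # client-side routing fallback for the React app
        try_files $uri /index.html;
    }
}
With that configuration the Dockerfile would EXPOSE 443 and the container would be run with -p 443:443 as above.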
I created the following Docker image:
# Stage 0, "build-stage", based on Node.js, to build and compile the frontend
FROM tiangolo/node-frontend:10 as build-stage
WORKDIR /app
COPY package*.json /app/
RUN npm install
COPY ./ /app/
RUN CI=true npm test
RUN npm run build
# Stage 1, based on Nginx, to have only the compiled app, ready for production with Nginx
FROM nginx:1.15
COPY --from=build-stage /app/build/ /usr/share/nginx/html
# Copy the default nginx.conf provided by tiangolo/node-frontend
COPY ./nginx.conf /etc/nginx/conf.d/v2.myapp.io
Where the nginx.conf is
server {
listen 80;
server_name serverip v2.myapp.io www.v2.myapp.io;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html =404;
}
include /etc/nginx/extra-conf.d/*.conf;
}
And then I execute the following shell script:
docker build -t mycontainer .
docker push mycontainer:latest
ssh root@ip 'docker pull mycontainer:latest'
ssh root@ip 'docker stop mycontainer'
ssh root@ip 'docker rm mycontainer'
ssh root@ip 'sudo docker run -p 8080:80 -it -d --name mycontainer mycontainer'
Well, it works fine, but the address v2.myapp.io points to a blank nginx page, so I guess I have some sort of misconfiguration, but I'm not sure what else could be happening.
Can someone help with this issue?
I remember that when I used to do all this manually I had a sites-enabled and a sites-available folder; I have tried creating them and putting the nginx.conf file in, but I had no luck.
What is failing there?
I have a nextjs project that I wish to run using Docker and nginx.
I want nginx to sit in front of nextjs (only nginx can talk to nextjs; the user has to go through nginx to reach nextjs).
Assuming a standard nextjs project structure and the dockerfile content provided below, is there a way to use nginx in Docker with nextjs?
I'm aware I can use docker-compose, but I'd like to keep it all in one Docker image, since I plan to push the image to Heroku web hosting.
NOTE: I'm using Server Side Rendering
dockerfile
# Based on the official Node.js image
FROM node:latest as builder
# Set working directory
WORKDIR /usr/app
# install node-prune (https://github.com/tj/node-prune)
RUN curl -sfL https://install.goreleaser.com/github.com/tj/node-prune.sh | bash -s -- -b /usr/local/bin
# Copy package.json and package-lock.json before other files
# Utilise Docker cache to save re-installing dependencies if unchanged
COPY package.json ./
COPY yarn.lock ./
# Install dependencies
RUN yarn install --frozen-lockfile
# Copy all files
COPY ./ ./
# Build app
RUN yarn build
# remove development dependencies
RUN yarn install --production
# run node prune. Reduce node_modules size
RUN /usr/local/bin/node-prune
#######################################################
FROM node:alpine
WORKDIR /usr/app
# COPY package.json next.config.js .env* ./
# COPY --from=builder /usr/app/public ./public
COPY --from=builder /usr/app/.next ./.next
COPY --from=builder /usr/app/node_modules ./node_modules
EXPOSE 3000
CMD ["node_modules/.bin/next", "start"]
dockerfile inspired by https://github.com/vercel/next.js/blob/canary/examples/with-docker/Dockerfile.multistage
Edit: nginx default.conf
upstream nextjs_upstream {
server nextjs:3000;
# We could add additional servers here for load-balancing
}
server {
listen 80 default_server;
server_name _;
server_tokens off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
location / {
proxy_pass http://nextjs_upstream;
}
}
In order to be able to use Nginx and NextJS together in a single Docker container without using Docker-Compose, you need to use Supervisord
Supervisor is a client/server system that allows its users to control
a number of processes on UNIX-like operating systems.
The issue wasn't the nginx config or the dockerfile; it was getting both nginx and nextjs to run when the container starts. Since I couldn't find another way to run both, supervisord was the tool I needed.
The following will be needed for it to work
Dockerfile
# Based on the official Node.js image
FROM node:latest as builder
# Set working directory
WORKDIR /usr/app
# Copy package.json and package-lock.json before other files
# Utilise Docker cache to save re-installing dependencies if unchanged
COPY package.json ./
COPY yarn.lock ./
# Install dependencies
RUN yarn install --frozen-lockfile
# Copy all files
COPY ./ ./
# Build app
RUN yarn build
# remove development dependencies
RUN yarn install --production
#######################################################
FROM nginx:alpine
WORKDIR /usr/app
RUN apk add nodejs-current npm supervisor
RUN mkdir -p /var/log/supervisor && mkdir -p /etc/supervisor/conf.d
# Remove any existing config files
RUN rm /etc/nginx/conf.d/*
# Copy nginx config files
# *.conf files in conf.d/ dir get included in main config
COPY ./.nginx/default.conf /etc/nginx/conf.d/
# COPY package.json next.config.js .env* ./
# COPY --from=builder /usr/app/public ./public
COPY --from=builder /usr/app/.next ./.next
COPY --from=builder /usr/app/node_modules ./node_modules
# supervisor base configuration
ADD supervisor.conf /etc/supervisor.conf
# replace $PORT in the nginx config (provided by the executor) and start supervisord (runs nextjs and nginx)
CMD sed -i -e 's/$PORT/'"$PORT"'/g' /etc/nginx/conf.d/default.conf && \
supervisord -c /etc/supervisor.conf
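For illustration, with PORT=80 supplied at run time (as in the docker run command further below), the sed step rewrites the listen directive in default.conf roughly like this (a before/after sketch, not an extra file):
# as shipped in the image
listen $PORT default_server;
# after the container starts with -e 'PORT=80'
listen 80 default_server;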
supervisor.conf
[supervisord]
nodaemon=true
[program:nextjs]
directory=/usr/app
command=node_modules/.bin/next start
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
autorestart=true
[program:nginx]
command=nginx -g 'daemon off;'
killasgroup=true
stopasgroup=true
redirect_stderr=true
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
autorestart=true
nginx config (default.conf)
upstream nextjs_upstream {
server localhost:3000;
# We could add additional servers here for load-balancing
}
server {
listen $PORT default_server;
server_name _;
server_tokens off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
location / {
proxy_pass http://nextjs_upstream;
}
}
NOTE: nginx is used as a reverse proxy. NextJS runs on port 3000, but the user won't be able to reach it directly; every request has to go through nginx.
Building docker image
docker build -t nextjs-img -f ./dockerfile .
Running docker container
docker run --rm -e 'PORT=80' -p 8080:80 -d --name nextjs-img nextjs-img:latest
Go to localhost:8080
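To verify from the command line, a quick hedged check could be (-I fetches only the response headers):
curl -I http://localhost:8080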
You can use docker-compose to run Nginx and your NextJS app in separate Docker containers, with a bridge network between them.
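A minimal docker-compose.yml sketch for that layout might look like the following. The service name nextApplication is an assumption and just has to match what proxy_pass references below; the certificate paths are placeholders:
version: "3.8"
services:
  nextApplication:       # hypothetical service name, referenced by proxy_pass below
    build: .
    expose:
      - "3000"           # reachable only from other containers on the bridge network
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
      - ./certs:/etc/nginx/certs:ro   # assumed location of cert.pem / cert.key
    depends_on:
      - nextApplication
Note that if the app listens on port 3000 as in the Dockerfiles above, proxy_pass would normally need to point at http://nextApplication:3000 (or the app would have to listen on port 80).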
then in nginx conf:
server {
listen 80;
listen 443 ssl;
server_name localhost [dns].com;
ssl_certificate certs/cert.pem;
ssl_certificate_key certs/cert.key;
location / {
proxy_pass http://nextApplication; # name based on your docker-compose file
proxy_http_version 1.1;
proxy_read_timeout 90;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host [dns].com;
proxy_cache_bypass $http_upgrade;
}
}
I didn't use an upstream block; the load balancer sits in front of nginx (at the cloud provider level).
I have a React app which runs perfectly without Docker on the VM with "npm run start". However, when I build it into a Docker image and run the container, it doesn't come up. My dockerfile is as follows:
FROM node:12
USER root
RUN mkdir -p /var/tmp/thermo && chown -R root:root /var/tmp/thermo
WORKDIR /var/tmp/thermo
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "run","start"]
I can successfully create the Docker image and then run it:
docker run -d -p 3000:3000 --name thermo-*** thermo-***
However, the container always exits; the container logs are as follows:
[root@*****]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
03701ed96bca thermo-api "docker-entrypoint.s…" 17 minutes ago Exited (0) 17 minutes ago thermo-api-app
[root@t****]#
[root@****]# docker logs 03701ed96bca
> material-kit-pro-react#1.9.0 start /var/tmp/thermo
> export PORT=3000 && react-scripts start
ℹ 「wds」: Project is running at http://172.17.0.2/
ℹ 「wds」: webpack output is served from
ℹ 「wds」: Content not from webpack is served from /var/tmp/thermo/public
ℹ 「wds」: 404s will fallback to /
Starting the development server..
Then when I curl my website ("curl localhost:3000"), nothing comes up.
I am not sure where I am going wrong. Any help would be appreciated!
Take another look at your code: the container exits immediately after you start it.
By the way, npm run start is only meant for the development environment; why don't you build the code and serve it with Nginx?
My suggestion is to build your code and use a Dockerfile like the example below, so that Nginx serves your app.
Dockerfile
# build environment
FROM node:lts-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json /app/package.json
RUN npm install
COPY . /app
RUN npm run build
# production environment
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
You can use this nginx.conf
server {
listen 80;
server_name localhost;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
location /web {
alias /usr/share/nginx/html;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
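A hedged sketch of building and running this image (the image and container name my-react-app is an assumption):
docker build -t my-react-app .
docker run -d -p 80:80 --name my-react-app my-react-app
The app should then be reachable at http://localhost/.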
I've successfully dockerized my app using two Docker images, one for nginx and a second for the app, and it runs well because I use docker-compose.
Now I only want to have one Dockerfile that contains both the app and nginx, and to run it on my local computer. How could I achieve that?
This is my nginx/default.conf
# Cache zone
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=STATIC:10m inactive=7d use_temp_path=off;
upstream nextjs {
server nextjs:3000;
}
server {
listen 80 default_server;
server_name _;
server_tokens off;
gzip on;
gzip_proxied any;
gzip_comp_level 4;
gzip_types text/css application/javascript image/svg+xml;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
# BUILT ASSETS (E.G. JS BUNDLES)
# Browser cache - max cache headers from Next.js as build id in url
# Server cache - valid forever (cleared after cache "inactive" period)
location /_next/static {
proxy_cache STATIC;
proxy_pass http://nextjs;
}
# STATIC ASSETS (E.G. IMAGES)
# Browser cache - "no-cache" headers from Next.js as no build id in url
# Server cache - refresh regularly in case of changes
location /static {
proxy_cache STATIC;
proxy_ignore_headers Cache-Control;
proxy_cache_valid 60m;
proxy_pass http://nextjs;
}
# DYNAMIC ASSETS - NO CACHE
location / {
proxy_pass http://nextjs;
}
}
My /nginx/Dockerfile
FROM nginx:alpine as build
RUN rm /etc/nginx/conf.d/*
COPY ./default.conf /etc/nginx/conf.d/
EXPOSE 80
CMD [ "nginx", "-g", "daemon off;" ]
/Dockerfile [old]
FROM node:alpine
WORKDIR /usr/app
RUN npm install --global pm2
COPY ./package*.json ./
RUN npm install --production
COPY ./ ./
RUN npm run build
EXPOSE 3000
USER node
CMD [ "pm2-runtime", "start", "npm", "--", "start" ]
and this is the new Dockerfile
FROM node:alpine as build
WORKDIR /usr/app
RUN npm install --global pm2
COPY ./package*.json ./
RUN npm install --production
COPY ./ ./
# Build app
RUN npm run build
EXPOSE 3000
USER node
CMD [ "pm2-runtime", "start", "npm", "--", "start" ]
FROM nginx:stable-alpine
COPY --from=build /usr/app/.next /usr/share/nginx/html
RUN mkdir /usr/share/nginx/log
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/default.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
I can build it, but whenever I run the image it throws this error:
host not found in upstream "nextjs:3000" in /etc/nginx/conf.d/default.conf:5
Thanks to @octagon_octopus, I finally solved this problem by changing my nginx/default.conf.
my Dockerfile
# build react app, it should be /build
# FROM node:12.2.0-alpine as build
FROM node:13-alpine as build
WORKDIR /app
COPY package.json /app/package.json
RUN npm install --only=prod
COPY . /app
RUN npm run build
# Creating nginx image and copy build folder from above
# FROM nginx:1.16.0-alpine
FROM nginx:stable-alpine
RUN mkdir /usr/share/nginx/buffer
COPY --from=build /app/.next /usr/share/nginx/buffer
COPY --from=build /app/deploy.sh /usr/share/nginx/buffer
RUN chmod +x /usr/share/nginx/buffer/deploy.sh
RUN cd /usr/share/nginx/buffer && ./deploy.sh
RUN mkdir /usr/share/nginx/log
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
and nginx/default.conf
server {
listen 80;
location / {
root /usr/share/nginx/html/pages;
index index.html index.htm;
try_files $uri $uri/ /index.html;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html/pages;
}
error_log /usr/share/nginx/log/error.log warn;
access_log /usr/share/nginx/log/access.log;
}
The reason you get host not found in upstream "nextjs:3000" in /etc/nginx/conf.d/default.conf:5 is that, according to your nginx/default.conf, nginx will forward all received requests to http://nextjs. This worked before because you probably had node running in a separate container called nextjs. Now you try to run nginx and node in the same container, so the nextjs container no longer exists and nginx has nothing to forward requests to.
It seems to me that you are trying to run a reverse proxy and a node application within the same container, when running them in two separate containers, like you had before, would be more desirable.
If you are just developing your node app locally, you won't need the nginx reverse proxy and you can just send requests to the node app directly, so only the node container is needed. When you deploy to production, you typically use something like an nginx reverse proxy for various reasons like SSL termination and load balancing. In that case you can deploy the nginx and node containers together.
If you really want to continue with your current approach, then you will probably have to forward the requests to http://localhost instead of http://nextjs, although I don't think that will be the only problem. Node is probably not running within your container either. You start the Node application with CMD [ "pm2-runtime", "start", "npm", "--", "start" ] in a multi-stage docker build and that node image will be discarded. You will have to start your Node application inside the nginx container instead.
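A minimal sketch of that single-container direction, assuming the whole app directory from the build stage is kept so that npm start (next start) can run, and with a deliberately naive process start; the supervisord setup shown in an earlier answer is the more robust way to run both processes:
FROM node:alpine as build
WORKDIR /usr/app
COPY ./package*.json ./
RUN npm install --production
COPY ./ ./
RUN npm run build
FROM nginx:stable-alpine
RUN apk add --no-cache nodejs npm
WORKDIR /usr/app
# keep the full app so `npm start` can run inside this container
COPY --from=build /usr/app ./
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/default.conf /etc/nginx/conf.d
EXPOSE 80
# start Next.js in the background and nginx in the foreground
CMD npm start & nginx -g 'daemon off;'
The upstream block in nginx/default.conf would then need server localhost:3000; instead of server nextjs:3000;, since both processes now share the same network namespace.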
I'm attempting to get a next.js app running in a docker container based on phusion/passenger-docker.
I have what I think is a complete setup based on passenger-docker documentation but I'm getting a 404 page from nginx.
A docker log dump shows that Passenger (or nginx) is looking for index.html:
[error] 48#48: *1 "/home/app/nhe_app/index.html" is not found
My startup file is /home/app/nhe_app/server.js
Dockerfile final stage:
# Build production container from builder stage
FROM phusion/passenger-nodejs:1.0.8
# Set correct environment variables.
ENV HOME /root
ENV NODE_ENV=production
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]
# Enable Nginx and Passenger
RUN rm -f /etc/service/nginx/down
WORKDIR /home/app/nhe_app
RUN rm /etc/nginx/sites-enabled/default
COPY --chown=app:app ./nhe_app.conf /etc/nginx/sites-enabled/nhe_app.conf
COPY --chown=app:app ./secret_key.conf /etc/nginx/main.d/secret_key.conf
COPY --chown=app:app ./gzip_max.conf /etc/nginx/conf.d/gzip_max.conf
COPY --chown=app:app --from=builder /app/server.js /app/.env /home/app/nhe_app/
COPY --chown=app:app --from=builder app/src /home/app/nhe_app/src
COPY --chown=app:app --from=builder app/node_modules /home/app/nhe_app/node_modules
# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
nginx configuration - nhe_app.conf:
server {
listen 80;
server_name glen-mac.local;
root /home/app/nhe_app/server.js;
passenger_enabled on;
passenger_user app;
passenger_startup_file server.js;
}
I expect that passenger will start nginx and run my app.
When I build and start the docker container it seems to expect index.html.
I'm building the docker container with
docker image build -t nhe_app .
And running it with
docker container run --name nhe_app -p 80:3000 nhe_app
Browsing to http://glen-mac.local/ shows nginx's formatted 404 page.
How can I configure passenger-docker to look for and execute my server.js rather than index.html?
There are several subtle problems in the OP question.
Most notably, Passenger seems to require that the app root path, defined by root in the nginx configuration above, has a top-level folder named public. This folder must not contain an index.html file and probably should be empty. This is shown in examples but not spelled out as a hard requirement in the docs.
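In this setup that could be as simple as one extra line in the final stage of the Dockerfile (a sketch, assuming the app root stays at /home/app/nhe_app):
# Passenger expects an (empty) public folder at the app root
RUN mkdir -p /home/app/nhe_app/public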
The second major error is that Passenger ignores the port specified in the app's server.js (3000 in this case) and instead uses the port specified in the nginx configuration. So the docker run command changed from:
docker container run --name nhe_app -p 80:3000 nhe_app
to
docker container run --name nhe_app -p 80:80 nhe_app
Otherwise, the best advice I can give is:
Learn the Passenger basics through a local installation (without Docker). Get the Passenger demo app working.
Get your app working in that local Passenger installation.
Apply what you have learned to implementing your app in passenger-docker.
For reference, here is a fuller Passenger server block along those lines (note that root points at the app directory, not at server.js):
server {
listen 7063;
server_name localhost;
root /home/app/nhe_app;
passenger_enabled on;
passenger_min_instances 1;
passenger_max_request_queue_size 100; # default: 100
passenger_app_env staging; # NODE_ENV; default: staging
passenger_app_root /home/app/nhe_app;
passenger_app_type node;
passenger_startup_file server.js;
}