My issue has already been asked here: How to use vue.js with Nginx? but trying those solutions didn't solve my problem.
When I build my Dockerfile and go to, for example, localhost:8080, it works (reloading the page works too). When I navigate to a different page, say localhost:8080/add_app, it shows the page the first time, but when I reload I get an error:
Error in docker desktop:
This is my Dockerfile:
# build stage
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY ./platform-frontend/package*.json ./
RUN npm install
COPY ./platform-frontend .
RUN npm run build
# production stage
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
COPY --from=build-stage /app/nginx/nginx.conf /etc/nginx/conf.d/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
This is my nginx.conf file:
server {
listen 80;
server_name localhost;
location / {
root /app/dist;
index index.html index.htm;
try_files $uri /index.html;
}
}
My project structure:
I solved the problem using the solution by maximkrouk from https://github.com/vuejs/v2.vuejs.org/issues/2818
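For anyone who doesn't want to chase the link: the usual fix for this symptom (I can't vouch that it is word-for-word the linked solution) is to make root match the directory the Dockerfile actually copies dist into, /usr/share/nginx/html, and keep the index.html fallback for history-mode routes:

```nginx
server {
    listen 80;
    server_name localhost;
    location / {
        # dist is copied to /usr/share/nginx/html in the production stage;
        # /app/dist only exists in the discarded build stage
        root /usr/share/nginx/html;
        index index.html;
        # return index.html for client-side routes so reloads don't 404
        try_files $uri $uri/ /index.html;
    }
}
```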
Dockerfile
Docker run command : docker run -itd -p 8080:80 prod
FROM node:16-alpine as builder
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
COPY . /app/
RUN npm install --silent
RUN npm install react-scripts@4.0.3 -g --silent
RUN npm run build
# production environment
FROM nginx:1.21.1-alpine
COPY --from=builder /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
default.conf file:
server {
listen 443 ssl default_server;
listen [::]:443 ssl default_server;
ssl_certificate /etc/nginx/sites-available/cert.crt;
ssl_certificate_key /etc/nginx/sites-available/ssl.key;
server_name ipaddress;
location / {
proxy_pass http://localhost:8080;
try_files $uri /index.html;
}
}
I am unable to see my index.html file when browsing to my IP address over HTTPS; it works fine with http://ipaddress:8080. Above are my Dockerfile and default.conf. Nothing shows up in the server logs.
I want to know whether the above configuration is correct, and if not, how to deploy a React app using Docker, SSL & Nginx.
Looking at your comments, it seems your port configuration is not correct: in nginx you listen on port 443, but your docker run command uses port 80 as the host port. Assuming the Node server is listening on port 8080, the docker run command should be:
$ docker run -itd -p 443:443 prod
Then try https://ipaddress. Depending on the certificate settings, you should either see a warning in the browser (if the certificate is not fully trusted, you might need to add it as an exception) or the proper contents.
I have a React app which I can run perfectly without Docker on the VM by running "npm run start". However, when I build it into a Docker image and run the container, it doesn't come up. My Dockerfile is as follows:
FROM node:12
USER root
RUN mkdir -p /var/tmp/thermo && chown -R root:root /var/tmp/thermo
WORKDIR /var/tmp/thermo
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "run","start"]
I can successfully create the Docker image and then run it:
docker run -d -p 3000:3000 --name thermo-*** thermo-***
however, the container always exits. The container logs are as follows:
[root#*****]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
03701ed96bca thermo-api "docker-entrypoint.s…" 17 minutes ago Exited (0) 17 minutes ago thermo-api-app
[root#t****]#
[root#****]# docker logs 03701ed96bca
> material-kit-pro-react#1.9.0 start /var/tmp/thermo
> export PORT=3000 && react-scripts start
ℹ 「wds」: Project is running at http://172.17.0.2/
ℹ 「wds」: webpack output is served from
ℹ 「wds」: Content not from webpack is served from /var/tmp/thermo/public
ℹ 「wds」: 404s will fallback to /
Starting the development server..
Then when I curl my website (curl localhost:3000), nothing comes up.
I am not sure what I am doing wrong. Any help would be appreciated!
Take another look at your code; the container exits immediately after you start it.
By the way, npm run start is only for the development environment. Why not build the code and serve it with Nginx?
My suggestion is to build your code and then use a Dockerfile like the example below; Nginx will serve your app.
Dockerfile
# build environment
FROM node:lts-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json /app/package.json
RUN npm install
COPY . /app
RUN npm run build
# production environment
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
You can use this nginx.conf
server {
listen 80;
server_name localhost;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
location /web {
alias /usr/share/nginx/html;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
I've successfully dockerized my app using two Docker images, one for nginx and a second for the app, and it runs well because I use Docker Compose.
Now I want just one Dockerfile containing both the app and nginx, and to run it on my local computer. How can I achieve that?
This is my nginx/default.conf:
# Cache zone
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=STATIC:10m inactive=7d use_temp_path=off;
upstream nextjs {
server nextjs:3000;
}
server {
listen 80 default_server;
server_name _;
server_tokens off;
gzip on;
gzip_proxied any;
gzip_comp_level 4;
gzip_types text/css application/javascript image/svg+xml;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
# BUILT ASSETS (E.G. JS BUNDLES)
# Browser cache - max cache headers from Next.js as build id in url
# Server cache - valid forever (cleared after cache "inactive" period)
location /_next/static {
proxy_cache STATIC;
proxy_pass http://nextjs;
}
# STATIC ASSETS (E.G. IMAGES)
# Browser cache - "no-cache" headers from Next.js as no build id in url
# Server cache - refresh regularly in case of changes
location /static {
proxy_cache STATIC;
proxy_ignore_headers Cache-Control;
proxy_cache_valid 60m;
proxy_pass http://nextjs;
}
# DYNAMIC ASSETS - NO CACHE
location / {
proxy_pass http://nextjs;
}
}
My /nginx/Dockerfile
FROM nginx:alpine as build
RUN rm /etc/nginx/conf.d/*
COPY ./default.conf /etc/nginx/conf.d/
EXPOSE 80
CMD [ "nginx", "-g", "daemon off;" ]
/Dockerfile [old]
FROM node:alpine
WORKDIR /usr/app
RUN npm install --global pm2
COPY ./package*.json ./
RUN npm install --production
COPY ./ ./
RUN npm run build
EXPOSE 3000
USER node
CMD [ "pm2-runtime", "start", "npm", "--", "start" ]
and this is the new Dockerfile
FROM node:alpine as build
WORKDIR /usr/app
RUN npm install --global pm2
COPY ./package*.json ./
RUN npm install --production
COPY ./ ./
# Build app
RUN npm run build
EXPOSE 3000
USER node
CMD [ "pm2-runtime", "start", "npm", "--", "start" ]
FROM nginx:stable-alpine
COPY --from=build /usr/app/.next /usr/share/nginx/html
RUN mkdir /usr/share/nginx/log
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/default.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
I can build it, but whenever I run the image, it throws an error:
host not found in upstream "nextjs:3000" in /etc/nginx/conf.d/default.conf:5
Thanks to @octagon_octopus,
I finally solved this problem by changing my nginx/default.conf.
My Dockerfile:
# build react app, it should be /build
# FROM node:12.2.0-alpine as build
FROM node:13-alpine as build
WORKDIR /app
COPY package.json /app/package.json
RUN npm install --only=prod
COPY . /app
RUN npm run build
# Creating nginx image and copy build folder from above
# FROM nginx:1.16.0-alpine
FROM nginx:stable-alpine
RUN mkdir /usr/share/nginx/buffer
COPY --from=build /app/.next /usr/share/nginx/buffer
COPY --from=build /app/deploy.sh /usr/share/nginx/buffer
RUN chmod +x /usr/share/nginx/buffer/deploy.sh
RUN cd /usr/share/nginx/buffer && ./deploy.sh
RUN mkdir /usr/share/nginx/log
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
and nginx/default.conf
server {
listen 80;
location / {
root /usr/share/nginx/html/pages;
index index.html index.htm;
try_files $uri $uri/ /index.html;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html/pages;
}
error_log /usr/share/nginx/log/error.log warn;
access_log /usr/share/nginx/log/access.log;
}
The reason you get host not found in upstream "nextjs:3000" in /etc/nginx/conf.d/default.conf:5 is because according to your nginx/default.conf, nginx will forward all received requests to http://nextjs. This worked before, because you probably had node running in a separate container called nextjs. Now you try to run nginx and node in the same container, so the nextjs container does not exist anymore and nginx has nothing to forward requests to.
It seems to me that you are trying to run a reverse proxy and node application within the same container, when running them in two separate containers should be more desirable like you had it before.
If you are just developing your node app locally, you won't need the nginx reverse proxy and you can just send requests to the node app directly, so only the node container is needed. When you deploy to production, you typically use something like an nginx reverse proxy for various reasons like SSL termination and load balancing. In that case you can deploy the nginx and node containers together.
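As a sketch of that two-container setup (service names and paths are assumptions based on the files shown above), a docker-compose.yml along these lines keeps the nextjs hostname resolvable from nginx, since Compose puts both services on a shared network:

```yaml
version: "3"
services:
  nextjs:
    build: .            # the Node/Next.js Dockerfile in the project root
    expose:
      - "3000"          # reachable by nginx on the compose network only
  nginx:
    build: ./nginx      # the nginx Dockerfile with default.conf
    ports:
      - "80:80"
    depends_on:
      - nextjs
```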
If you really want to continue with your current approach, then you will probably have to forward the requests to http://localhost instead of http://nextjs, although I don't think that will be the only problem. Node is probably not running within your container either. You start the Node application with CMD [ "pm2-runtime", "start", "npm", "--", "start" ] in a multi-stage docker build and that node image will be discarded. You will have to start your Node application inside the nginx container instead.
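If you do stick with the single-container approach, a rough sketch of an entrypoint (the script name is made up, and it assumes the upstream was changed to localhost:3000 as described above) would be to start Node in the background and keep nginx in the foreground:

```shell
#!/bin/sh
# entrypoint.sh (hypothetical): launch the Next.js app in the background,
# then run nginx as the container's foreground process
pm2-runtime start npm -- start &
nginx -g 'daemon off;'
```

The image would then need node, pm2 and the built app present in the nginx stage, not just the .next output.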
I have installed nginx-extras from the Ubuntu bash shell on my Windows 10 desktop. This is required to spin up a Docker container for an ASP.NET Core 3.1 Blazor WebAssembly application that serves the static web pages. My nginx.conf:
events { }
http {
include mime.types;
types {
application/wasm wasm;
}
server {
listen 80;
index index.html;
location / {
root /var/www/web;
try_files $uri $uri/ /index.html =404;
}
}
}
Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app
COPY . ./
RUN dotnet publish -c Release -o output
FROM nginx:alpine
WORKDIR /var/www/web
COPY --from=build-env /app/output/wwwroot .
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
My build command was successful.
However, when I wanted to create a container using the command docker run -p 8080:80 docker-wasm-blazor, it gave me an error:
[emerg] 1#1: unknown directive "events" in /etc/nginx/nginx.conf:1 nginx: [emerg] unknown directive "events" in /etc/nginx/nginx.conf:1
I am very new to nginx and containerisation, so any help will be highly appreciated. Thanks.
I had the same error and fixed it by changing the encoding of the nginx.conf file. If you create the nginx.conf file in/with Visual Studio (Add -> New File), you will most likely get the wrong encoding (other people have hit the same error).
In my case the encoding was UTF-8 (most likely with a byte-order mark, which nginx reads as stray bytes in front of the first directive) and I needed us-ascii encoding. The easiest fix was to delete my nginx.conf file and recreate it outside of Visual Studio (I used Sublime Text, but I think you can use Notepad).
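A quick way to check for this (a byte-order mark is a common cause of nginx's "unknown directive" error on line 1) is to dump the first three bytes of the file; this assumes a POSIX shell with od available:

```shell
# A UTF-8 BOM shows up as the bytes ef bb bf at the start of the file
head -c 3 nginx.conf | od -An -tx1
```

If you see ef bb bf, re-save the file without a BOM (most editors have an encoding option such as "UTF-8 without signature").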
I sorted out the issue by dropping my custom nginx.conf and letting the image's default nginx.conf be used, i.e. by removing the last COPY line from the Dockerfile. Now my Dockerfile looks like this:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app
COPY . ./
RUN dotnet publish -c Release -o output
FROM nginx
WORKDIR /usr/share/nginx/html
COPY --from=build-env /app/output/wwwroot/ .
I got the same issue and fixed it by removing the whitespace before and after "events" and "http", so my config file became:
events{}
http{
include mime.types;
types {
application/wasm wasm;
}
server {
listen 80;
# Here, we set the location for Nginx to serve the files
# by looking for index.html
location / {
root /usr/local/webapp/nginx/html;
try_files $uri $uri/ /index.html =404;
}
}
}
I want to redirect a user who goes to the https version of my website, which is hosted in a Docker swarm, to the http version.
I'm trying to do this using nginx; however, the setup I'm using isn't working. I've created a new Core 2.0 Web App to try to get it working in the simplest context possible. In addition to the Web App, I also have my Dockerfile:
FROM microsoft/aspnetcore-build:2.0 AS build-env
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# Build nginx image to redirect http to https
FROM nginx:alpine
EXPOSE 80
COPY nginx.conf /etc/nginx/nginx.conf
# Build runtime image
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "RedirectService.dll"]
and my nginx file:
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
return 301 https://www.google.co.uk;
}
After building my image, I run it with docker run -p 8006:80 redirectservice. What I expect is to be redirected to Google when I navigate to http://localhost:8006; however, no redirect happens.
Can anyone see anything I'm doing wrong? Any help would be massively appreciated.
It's not redirecting you because the nginx process is not running.
Take a look at the nginx image Dockerfile (https://github.com/nginxinc/docker-nginx/blob/590f9ba27d6d11da346440682891bee6694245f5/mainline/alpine/Dockerfile) - last line is:
CMD ["nginx", "-g", "daemon off;"]
In your Dockerfile you replaced it with:
ENTRYPOINT ["dotnet", "RedirectService.dll"]
And thus nginx is never started.
You need to create a sh script that runs both nginx and dotnet and waits until both of them end (i.e. crash).
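A minimal sketch of such a script (the file name start.sh is an assumption, and it presumes nginx and the published app actually coexist in the final image, e.g. by installing nginx into the runtime stage instead of using a separate, discarded nginx stage):

```shell
#!/bin/sh
# start.sh (hypothetical): nginx daemonizes by default, so start it first,
# then keep dotnet in the foreground so the container exits if the app exits
nginx
dotnet RedirectService.dll
```

A more robust version would background both processes and wait, so the container also stops when nginx dies rather than only when dotnet does.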