Let's Encrypt certificate with Docker (MERN)

First of all, I am a novice at Docker, but I am trying to deploy my MERN app to Google Cloud Platform. I have dockerized the React, Node and nginx parts, but I also want to set up SSL certificates (Let's Encrypt) with Certbot. I couldn't find any Certbot tutorial that fits the setup I already have. What would be the right procedure for this?
Dockerfile
# pull the Node.js Docker image
FROM node:alpine as builder
# create the directory inside the container
WORKDIR ./Frontend
# copy the package.json files from local machine to the workdir in container
COPY package*.json ./
# install dependencies inside the container
RUN npm install --force
# copy the remaining project files into the container
COPY . ./
# build the production bundle
RUN npm run build
#stage 2
FROM nginx
# WORKDIR /usr/share/nginx/html
COPY default.conf /etc/nginx/conf.d/default.conf
# Copies static resources from builder stage --from=builder
COPY --from=builder ./Frontend/build /var/www/html
EXPOSE 80
RUN chown -R nginx:nginx /var/www/html/
default.conf (nginx)
server {
    location /apipoint {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://nodeserver:8082;
    }
    location / {
        root /var/www/html;
    }
}
docker-compose.yaml
version: "3.8"
services:
  nodeserver:
    build:
      context: ./Backend
    ports:
      - "8082:8082"
  frontend:
    build:
      context: ./Frontend
    ports:
      - "80:80"
Currently I am running docker-compose up to build and deploy the app to GCP, and I want the certificate to be generated as part of that same command.
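A common pattern (not something covered in this post, and the names here are assumptions) is to add a certbot service to the compose file that shares a webroot volume and a certificate volume with nginx, so certificates can be issued and renewed without touching the app containers. A minimal sketch, assuming the domain example.com:

version: "3.8"
services:
  frontend:
    build:
      context: ./Frontend
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # nginx serves /.well-known/acme-challenge/ from this webroot
      - certbot-www:/var/www/certbot
      # nginx reads the issued certificates from here
      - certbot-conf:/etc/letsencrypt
  certbot:
    image: certbot/certbot
    volumes:
      - certbot-www:/var/www/certbot
      - certbot-conf:/etc/letsencrypt
    # one-shot issuance; renewals are usually a cron job or loop around "certbot renew"
    command: certonly --webroot -w /var/www/certbot -d example.com --email you@example.com --agree-tos --no-eff-email
volumes:
  certbot-www:
  certbot-conf:

For this to work, default.conf also needs a location /.well-known/acme-challenge/ block rooted at /var/www/certbot, and a 443 server block referencing /etc/letsencrypt/live/example.com/fullchain.pem and privkey.pem.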

Related

Upload file to dockerized app on the fly, then serve

I've got a Python web app that runs inside two Docker containers, one for the FastAPI backend and one for the Vue.js front-end (plus a third one for the Postgres DB). My task is to upload a file from the front-end client to the server, store it permanently and serve it, so that I can use static URLs in my img tags.
A similar question is here: How to upload file outside Docker container in Flask app. But it describes an approach where the server must be restarted after an upload for the change to take effect. I need everything to happen on the fly, while the app is running.
Dockerfile for front-end:
FROM node:16.14.2 as builder
WORKDIR /admin
COPY package*.json ./
COPY vite.config.js ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx:1.21
COPY ./nginx/nginx.conf /etc/nginx/nginx.conf
RUN rm -rf /usr/share/nginx/html/*
COPY --from=builder /admin/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Dockerfile for backend:
FROM tiangolo/uvicorn-gunicorn:python3.9
EXPOSE 8000
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
COPY ./requirements.txt /tmp/requirements.txt
RUN pip install --no-cache-dir -r /tmp/requirements.txt
WORKDIR /app
COPY . /app
I can certainly transfer a file (as a raw byte stream) from the front-end to the backend via Axios and receive it on the Python backend side. I can then process it however I like. But how can I store it in a location where the front-end container can read it and serve it statically?
UPDATE (Use Docker Volumes)
As suggested in this question, I've tried to use a shared volume for my purpose. My compose file now looks like this:
version: "3.8"
services:
  use_frontend:
    container_name: 'use_frontend'
    # --> ADDED <--
    volumes:
      - 'myshare:/etc/nginx'
      - 'myshare:/usr/share/nginx/html'
    build:
      context: ./admin
      dockerfile: Dockerfile
    restart: always
    depends_on:
      - use_backend
    ports:
      - 8090:80
  use_db:
    container_name: use_db
    image: postgres:14.2
    # etc etc...
  use_backend:
    container_name: 'use_backend'
    volumes:
      # --> ADDED <--
      - 'myshare:/usr/share/nginx/html'
    build:
      context: ./api
      dockerfile: Dockerfile
    restart: always
    depends_on:
      - use_db
    # etc etc...
# --> ADDED <--
volumes:
  myshare:
    driver: local
The Dockerfile for use_frontend hasn't changed (must it?)
FROM node:16.14.2 as builder
WORKDIR /admin
# copy out files for npm
COPY package*.json ./
COPY vite.config.js ./
# install and build Vue.js
RUN npm install
COPY . .
RUN npm run build
# Nginx image
FROM nginx:1.21
# copy Nginx conf file to VOLUME mounted folder
COPY ./nginx/nginx.conf /etc/nginx/nginx.conf
# clean everything in VOLUME mounted folder (app entry point)
RUN rm -rf /usr/share/nginx/html/*
# copy compiled app to VOLUME mounted folder (app entry point)
COPY --from=builder /admin/dist /usr/share/nginx/html
# expose port 80 for HTTP access
EXPOSE 80
# run Nginx
CMD ["nginx", "-g", "daemon off;"]
The Nginx conf file hasn't changed either:
events {}
http {
    server {
        listen 80;
        # app entry point
        root /usr/share/nginx/html;
        # MIME types
        include /etc/nginx/mime.types;
        client_max_body_size 20M;
        location / {
            try_files $uri /index.html;
        }
        # etc etc...
    }
}
But after doing
docker compose build
docker compose up
I'm getting file not found errors from Nginx:
use_frontend | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
use_frontend | /docker-entrypoint.sh: Configuration complete; ready for start up
use_frontend | 2022/07/24 23:21:32 [emerg] 1#1: open() "/etc/nginx/nginx.conf" failed (2: No such file or directory)
use_frontend | nginx: [emerg] open() "/etc/nginx/nginx.conf" failed (2: No such file or directory)
What exactly am I doing wrong with the Docker volume mounting?
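For what it's worth, the error suggests the named volume is masking the image's own files: once myshare is mounted over /etc/nginx, the nginx.conf baked in at build time is no longer visible to the container. A narrower approach (a sketch, with an assumed uploads path) is to share only a dedicated uploads directory and leave /etc/nginx and the webroot alone:

services:
  use_frontend:
    volumes:
      - 'myshare:/usr/share/nginx/html/uploads'
    # build, ports etc. unchanged
  use_backend:
    volumes:
      - 'myshare:/app/uploads'
    # build, depends_on etc. unchanged
volumes:
  myshare:
    driver: local

The backend then writes uploaded files to /app/uploads and nginx serves them under /uploads, without either container's config or build output being hidden by the mount.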

Unable to load Vue app's js and css files with Nginx

I'm trying to deploy a Vue.js app using docker-compose and Nginx. I'm a beginner with Nginx, and after searching through StackOverflow and blogs, the website still returns a blank page (it loads Vue's index.html, but fails to load any JS or CSS files).
Here is my config:
# docker-compose.yml
version: "3.8"
services:
  web:
    build: ./web
    ports:
      - 8080:8080
    volumes:
      - ./dist:/app
    depends_on:
      - nginx
  nginx:
    build: ./nginx
    ports:
      - 80:80
    volumes:
      - ./dist:/app
    restart: always
# ./web/Dockerfile
FROM node:lts-alpine as build-stage
RUN npm install -g http-server
WORKDIR /app
COPY package*.json ./
RUN npm cache clean --force && npm install
COPY . .
RUN npm run build
EXPOSE 8080
CMD [ "http-server", "dist" ]
# ./nginx/Dockerfile
FROM nginx:1.21-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
# nginx.conf
upstream client {
    server web:8080;
}
server {
    listen 80;
    include /etc/nginx/mime.types;
    location / {
        proxy_pass http://client;
        root /app;
        index index.html;
        try_files $uri $uri/ /index.html;
        include /etc/nginx/mime.types;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
    location ~ \.css {
        add_header Content-Type text/css;
    }
    location ~ \.js {
        add_header Content-Type application/x-javascript;
    }
}
Browser console when I open the website: (screenshot not preserved)
And the nginx log shows the following error:
[error] 22#22: *12 open() "/etc/nginx/html/css/app.68c1e752.css" failed (2: No such file or directory),
What should I do to fix this issue?
What you'd typically do is have a single, multi-stage frontend image, like in the Dockerize Vue.js App Real-World Example, that builds your app and copies the dist contents into NGINX's webroot. You don't need separate NGINX and http-server images, because NGINX is already a server (and a much better one).
# docker-compose.yml
version: "3.8"
services:
  nginx:
    build: ./nginx
    ports:
      - 80:80
    restart: always
# ./nginx/Dockerfile
# build stage
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# production stage
FROM nginx:stable-alpine as production-stage
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=build-stage /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Your NGINX config really only needs the try_files declaration for HTML5 History Mode.
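For reference, a minimal default.conf along those lines might look like this (a sketch, not taken from the original answer):

server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html;
    include /etc/nginx/mime.types;
    location / {
        # HTML5 History Mode: fall back to index.html for client-side routes
        try_files $uri $uri/ /index.html;
    }
}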

Content-Type problem with Rails and NGINX

I have a dockerized Ruby on Rails application which also uses NGINX.
When I build the containers (using docker-compose build && docker-compose up), the containers start, but the CSS and JavaScript files can't be loaded, because their Content-Type is text/html.
What did I configure wrong? I tried several other solutions in the nginx.conf, but nothing worked.
My configuration:
docker-compose.yml
version: '3'
services:
  app:
    container_name: trainee_manager_app
    build:
      context: .
      dockerfile: ./docker/app/Dockerfile
    environment:
      - DB_USERNAME=postgres
      - DB_PASSWORD=postgres
      - DB_PORT=5432
    depends_on:
      - db
  db:
    image: postgres:13
    container_name: trainee_manager_db
    volumes:
      - pg_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=admin
  server:
    container_name: trainee_manager_server
    build:
      context: .
      dockerfile: ./docker/web/Dockerfile
    depends_on:
      - app
    ports:
      - 86:80
networks:
  default:
    external:
      name: trainee-manager-network
volumes:
  pg_data:
app/Dockerfile:
FROM ruby:2.7.3
RUN apt-get update -qq
RUN apt-get install -y make autoconf libtool make gcc perl gettext gperf && git clone https://github.com/FreeTDS/freetds.git && cd freetds && sh ./autogen.sh && make && make install
# for postgres
RUN apt-get install -y libpq-dev
# for nokogiri
RUN apt-get install -y libxml2-dev libxslt1-dev
# for a JS runtime
RUN apt-get install -y nodejs
# Setting an environment variable for the Rails app
ENV RAILS_ROOT /var/www/trainee_manager
RUN mkdir -p $RAILS_ROOT
# Setting the working directory
WORKDIR $RAILS_ROOT
# Setting up the Environment
ENV RAILS_ENV='production'
ENV RACK_ENV='production'
# Adding the Gems
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
RUN bundle install --jobs 20 --retry 5 --without development test
# Adding all Project files
COPY . .
RUN bundle exec rake assets:clobber
RUN bundle exec rake assets:precompile
RUN ["chmod", "+x", "docker/app/entrypoint.sh"]
ENTRYPOINT ["docker/app/entrypoint.sh"]
EXPOSE 3000
CMD ["bundle", "exec", "puma", "-p", "3000"]
app/entrypoint.sh:
#!/bin/bash
# Remove a potentially pre-existing server.pid for Rails.
rm -f /app/tmp/pids/server.pid
bundle exec rake db:create RAILS_ENV=production
bundle exec rake db:migrate RAILS_ENV=production
bundle exec rake db:seed RAILS_ENV=production
# Then exec the container's main process (what's set as CMD in the Dockerfile).
exec "$@"
web/Dockerfile:
# Base Image
FROM nginx
# Dependencies
RUN apt-get update -qq && apt-get -y install apache2-utils
# Establish where Nginx should look for files
ENV RAILS_ROOT /var/www/trainee_manager
# Working Directory
WORKDIR $RAILS_ROOT
# Creating the Log-Directory
RUN mkdir log
# Copy static assets
COPY public public/
# Copy the NGINX Config-Template
COPY docker/web/nginx.conf /tmp/docker.nginx
# substitute variable references in the Nginx config template for real values from the environment
# put the final config in its place
RUN envsubst '$RAILS_ROOT' < /tmp/docker.nginx > /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
web/nginx.conf:
upstream rails_app {
    server app:3000;
}
server {
    # Defining the Domain
    server_name <SERVERNAME>;
    client_max_body_size 200m;
    # Define the public application root
    root $RAILS_ROOT/public;
    index index.html;
    # define where Nginx should write its logs
    access_log $RAILS_ROOT/log/nginx.access.log;
    error_log $RAILS_ROOT/log/nginx.error.log;
    # deny requests for files that should never be accessed
    location ~ /\. {
        deny all;
    }
    location ~* ^.+\.(rb|log)$ {
        deny all;
    }
    # serve static (compiled) assets directly if they exist (for rails production)
    location ~ ^/(assets|images|javascripts|stylesheets|swfs|system)/ {
        try_files $uri @rails;
        access_log off;
        gzip_static on; # to serve pre-gzipped version
        expires max;
        add_header Cache-Control public;
        # Some browsers still send conditional-GET requests if there's a
        # Last-Modified header or an ETag header even if they haven't
        # reached the expiry date sent in the Expires header.
        add_header Last-Modified "";
        add_header ETag "";
        break;
    }
    # send non-static file requests to the app server
    location / {
        try_files $uri @rails;
    }
    location @rails {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://rails_app;
    }
    listen 80;
}
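One plausible cause (an assumption; the thread shows no accepted answer): the web image runs COPY public public/ from the build context, while assets:precompile runs inside the app image, so the fingerprinted files may never reach the nginx container. try_files then falls through to @rails and the app answers with an HTML page, which would explain the text/html Content-Type. A quick check:

# list what nginx can actually serve (service name from the compose file above)
docker-compose exec server ls /var/www/trainee_manager/public/assets | head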

Is there a way to use nextjs with docker and nginx

I have a Next.js project that I wish to run using Docker and nginx.
I want nginx to sit in front of Next.js (only nginx can talk to Next.js; the user has to go through nginx to reach Next.js).
Assuming a standard Next.js project structure and the Dockerfile content provided below, is there a way to use nginx in Docker with Next.js?
I'm aware I could use Docker Compose, but I'd like to keep it all in one Docker image, since I plan to push the image to Heroku web hosting.
NOTE: I'm using Server Side Rendering.
dockerfile
# Based on the official Node.js image
FROM node:latest as builder
# Set working directory
WORKDIR /usr/app
# install node-prune (https://github.com/tj/node-prune)
RUN curl -sfL https://install.goreleaser.com/github.com/tj/node-prune.sh | bash -s -- -b /usr/local/bin
# Copy package.json and package-lock.json before other files
# Utilise Docker cache to save re-installing dependencies if unchanged
COPY package.json ./
COPY yarn.lock ./
# Install dependencies
RUN yarn install --frozen-lockfile
# Copy all files
COPY ./ ./
# Build app
RUN yarn build
# remove development dependencies
RUN yarn install --production
# run node prune. Reduce node_modules size
RUN /usr/local/bin/node-prune
#######################################################
FROM node:alpine
WORKDIR /usr/app
# COPY package.json next.config.js .env* ./
# COPY --from=builder /usr/app/public ./public
COPY --from=builder /usr/app/.next ./.next
COPY --from=builder /usr/app/node_modules ./node_modules
EXPOSE 3000
CMD ["node_modules/.bin/next", "start"]
dockerfile inspired by https://github.com/vercel/next.js/blob/canary/examples/with-docker/Dockerfile.multistage
Edit: nginx default.conf
upstream nextjs_upstream {
    server nextjs:3000;
    # We could add additional servers here for load-balancing
}
server {
    listen 80 default_server;
    server_name _;
    server_tokens off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    location / {
        proxy_pass http://nextjs_upstream;
    }
}
In order to use Nginx and NextJS together in a single Docker container without Docker Compose, you need to use Supervisord.
Supervisor is a client/server system that allows its users to control a number of processes on UNIX-like operating systems.
The issue wasn't the nginx config or the dockerfile; it was running both nginx and NextJS when starting the container. Since I couldn't find another way to run both, supervisord was the tool I needed.
The following is needed for it to work:
Dockerfile
# Based on the official Node.js image
FROM node:latest as builder
# Set working directory
WORKDIR /usr/app
# Copy package.json and package-lock.json before other files
# Utilise Docker cache to save re-installing dependencies if unchanged
COPY package.json ./
COPY yarn.lock ./
# Install dependencies
RUN yarn install --frozen-lockfile
# Copy all files
COPY ./ ./
# Build app
RUN yarn build
# remove development dependencies
RUN yarn install --production
#######################################################
FROM nginx:alpine
WORKDIR /usr/app
RUN apk add nodejs-current npm supervisor
RUN mkdir -p /var/log/supervisor && mkdir -p /etc/supervisor/conf.d
# Remove any existing config files
RUN rm /etc/nginx/conf.d/*
# Copy nginx config files
# *.conf files in conf.d/ dir get included in main config
COPY ./.nginx/default.conf /etc/nginx/conf.d/
# COPY package.json next.config.js .env* ./
# COPY --from=builder /usr/app/public ./public
COPY --from=builder /usr/app/.next ./.next
COPY --from=builder /usr/app/node_modules ./node_modules
# supervisor base configuration
ADD supervisor.conf /etc/supervisor.conf
# replace $PORT in nginx config (provided by the executor) and start supervisord (runs nextjs and nginx)
CMD sed -i -e 's/$PORT/'"$PORT"'/g' /etc/nginx/conf.d/default.conf && \
supervisord -c /etc/supervisor.conf
supervisor.conf
[supervisord]
nodaemon=true
[program:nextjs]
directory=/usr/app
command=node_modules/.bin/next start
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
autorestart=true
[program:nginx]
command=nginx -g 'daemon off;'
killasgroup=true
stopasgroup=true
redirect_stderr=true
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
autorestart=true
nginx config (default.conf)
upstream nextjs_upstream {
    server localhost:3000;
    # We could add additional servers here for load-balancing
}
server {
    listen $PORT default_server;
    server_name _;
    server_tokens off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    location / {
        proxy_pass http://nextjs_upstream;
    }
}
NOTE: nginx is used as a reverse proxy. NextJS runs on port 3000, but the user can't reach it directly; everything has to go through nginx.
Building docker image
docker build -t nextjs-img -f ./dockerfile .
Running docker container
docker run --rm -e 'PORT=80' -p 8080:80 -d --name nextjs-img nextjs-img:latest
Go to localhost:8080
You can use docker-compose to run Nginx and your NextJS app in separate Docker containers, with a bridge network between them. Then, in the nginx conf:
server {
    listen 80;
    listen 443 ssl;
    server_name localhost [dns].com;
    ssl_certificate certs/cert.pem;
    ssl_certificate_key certs/cert.key;
    location / {
        proxy_pass http://nextApplication; # name based on your docker-compose file
        proxy_http_version 1.1;
        proxy_read_timeout 90;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host [dns].com;
        proxy_cache_bypass $http_upgrade;
    }
}
I didn't use an upstream block; the load balancer sits in front of nginx (at the cloud provider level).

Dockerize next.js and nginx in one Dockerfile

I've successfully dockerized my app using two Docker images, one for nginx and a second for the app, and it runs well because I use docker-compose.
Now I want to have just one Dockerfile that contains both the app and nginx, and run it on my local computer. How can I achieve that?
This is my nginx/default.conf
# Cache zone
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=STATIC:10m inactive=7d use_temp_path=off;
upstream nextjs {
    server nextjs:3000;
}
server {
    listen 80 default_server;
    server_name _;
    server_tokens off;
    gzip on;
    gzip_proxied any;
    gzip_comp_level 4;
    gzip_types text/css application/javascript image/svg+xml;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    # BUILT ASSETS (E.G. JS BUNDLES)
    # Browser cache - max cache headers from Next.js as build id in url
    # Server cache - valid forever (cleared after cache "inactive" period)
    location /_next/static {
        proxy_cache STATIC;
        proxy_pass http://nextjs;
    }
    # STATIC ASSETS (E.G. IMAGES)
    # Browser cache - "no-cache" headers from Next.js as no build id in url
    # Server cache - refresh regularly in case of changes
    location /static {
        proxy_cache STATIC;
        proxy_ignore_headers Cache-Control;
        proxy_cache_valid 60m;
        proxy_pass http://nextjs;
    }
    # DYNAMIC ASSETS - NO CACHE
    location / {
        proxy_pass http://nextjs;
    }
}
My /nginx/Dockerfile
FROM nginx:alpine as build
RUN rm /etc/nginx/conf.d/*
COPY ./default.conf /etc/nginx/conf.d/
EXPOSE 80
CMD [ "nginx", "-g", "daemon off;" ]
/Dockerfile [old]
FROM node:alpine
WORKDIR /usr/app
RUN npm install --global pm2
COPY ./package*.json ./
RUN npm install --production
COPY ./ ./
RUN npm run build
EXPOSE 3000
USER node
CMD [ "pm2-runtime", "start", "npm", "--", "start" ]
and this is the new Dockerfile
FROM node:alpine as build
WORKDIR /usr/app
RUN npm install --global pm2
COPY ./package*.json ./
RUN npm install --production
COPY ./ ./
# Build app
RUN npm run build
EXPOSE 3000
USER node
CMD [ "pm2-runtime", "start", "npm", "--", "start" ]
FROM nginx:stable-alpine
COPY --from=build /usr/app/.next /usr/share/nginx/html
RUN mkdir /usr/share/nginx/log
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/default.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
I can build it, but whenever I run the image it throws this error:
host not found in upstream "nextjs:3000" in /etc/nginx/conf.d/default.conf:5
Thanks to @octagon_octopus, I finally solved this problem by changing my nginx/default.conf.
my Dockerfile
# build react app, it should be /build
# FROM node:12.2.0-alpine as build
FROM node:13-alpine as build
WORKDIR /app
COPY package.json /app/package.json
RUN npm install --only=prod
COPY . /app
RUN npm run build
# Creating nginx image and copy build folder from above
# FROM nginx:1.16.0-alpine
FROM nginx:stable-alpine
RUN mkdir /usr/share/nginx/buffer
COPY --from=build /app/.next /usr/share/nginx/buffer
COPY --from=build /app/deploy.sh /usr/share/nginx/buffer
RUN chmod +x /usr/share/nginx/buffer/deploy.sh
RUN cd /usr/share/nginx/buffer && ./deploy.sh
RUN mkdir /usr/share/nginx/log
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
and nginx/default.conf
server {
    listen 80;
    location / {
        root /usr/share/nginx/html/pages;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html/pages;
    }
    error_log /usr/share/nginx/log/error.log warn;
    access_log /usr/share/nginx/log/access.log;
}
The reason you get host not found in upstream "nextjs:3000" in /etc/nginx/conf.d/default.conf:5 is that, according to your nginx/default.conf, nginx forwards all received requests to http://nextjs. This worked before because you had Node running in a separate container called nextjs. Now you are trying to run nginx and Node in the same container, so the nextjs container no longer exists and nginx has nothing to forward requests to.
It seems to me that you are trying to run a reverse proxy and a Node application within the same container, when running them in two separate containers, like you had before, is more desirable (see the compose sketch below).
If you are just developing your Node app locally, you won't need the nginx reverse proxy and can send requests to the Node app directly, so only the Node container is needed. When you deploy to production, you typically put something like an nginx reverse proxy in front for reasons such as SSL termination and load balancing. In that case you can deploy the nginx and Node containers together.
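For reference, a minimal docker-compose sketch of that two-container layout (service names and build paths are assumptions, chosen so the nextjs hostname in the upstream block resolves):

version: "3.8"
services:
  nextjs:
    build: .          # the Node/Next.js Dockerfile ([old] above)
    expose:
      - "3000"
  nginx:
    build: ./nginx
    ports:
      - "80:80"
    depends_on:
      - nextjs

Both services sit on the same default compose network, so nginx can resolve the hostname nextjs, which is exactly what upstream nextjs { server nextjs:3000; } expects.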
If you really want to continue with your current single-container approach, you will probably have to forward the requests to http://localhost instead of http://nextjs, although I don't think that will be the only problem. Node is probably not running within your container either: you start the Node application with CMD [ "pm2-runtime", "start", "npm", "--", "start" ] in a stage of a multi-stage Docker build, and that Node stage is discarded. You would have to start your Node application inside the nginx container instead.
