Nginx cannot find fullchain.pem from Letsencrypt certificate - docker

I generate certificates using certbot certonly for my app deployed on Ubuntu 22.04 (an AWS EC2 instance). But when I deploy nginx using Docker, I get this error:
2023/01/12 14:09:58 [emerg] 6#6: cannot load certificate "/etc/letsencrypt/live/api.melodymaster.io/fullchain.pem":
BIO_new_file() failed
(SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/letsencrypt/live/api.melodymaster.io/fullchain.pem','r')
error:2006D080:BIO routines:BIO_new_file:no such file)
even though the certificates are present on the machine:
cat /etc/letsencrypt/live/api.melodymaster.io/fullchain.pem
-----BEGIN CERTIFICATE-----
MII...
I have already granted full permissions on this file.
Here is my docker-compose.yml for nginx:
nginx:
  image: docker.pkg.github.com/thomasroudil/melodymaster/melodymaster_dispatcher:${VERSION}
  build:
    context: ../dispatcher
    dockerfile: ./Dockerfile
  volumes:
    - '/etc/letsencrypt:/etc/letsencrypt'
  network_mode: host
and my Dockerfile:
FROM nginx:1.17.0

RUN apt-get update && apt-get install -y \
        build-essential \
        curl \
    && rm -rf /var/lib/apt/lists/*

COPY nginx.conf /etc/nginx/nginx.conf
COPY mime.types /etc/nginx/mime.types
COPY conf.d /etc/nginx/conf.d

CMD nginx -g "daemon off;"
with this .conf file:
server {
    listen 80;
    server_name api.melodymaster.io;
    return 301 https://api.melodymaster.io$request_uri;
}

server {
    listen 443 ssl;
    server_name api.melodymaster.io;

    ssl_certificate /etc/letsencrypt/live/api.melodymaster.io/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.melodymaster.io/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;

    location ^~ / {
        proxy_pass http://127.0.0.1:4067$request_uri;
    }
}
Do you have any hints as to why this error occurs?
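One thing worth checking before debugging nginx itself: under /etc/letsencrypt, the files in live/ are symlinks into archive/, so a bind mount must bring both directories into the container (mounting all of /etc/letsencrypt, as the compose file above does, preserves them; mounting only live/ would not). A minimal sketch of the symlink behaviour, with hypothetical paths:

```shell
# Let's Encrypt's live/ files are symlinks into archive/. If a container
# mounts only live/, the targets are missing and nginx reports
# "No such file or directory" even though ls shows the file name.
mkdir -p /tmp/le-demo/archive/example.com /tmp/le-demo/live/example.com
printf 'CERT\n' > /tmp/le-demo/archive/example.com/fullchain1.pem
ln -sf ../../archive/example.com/fullchain1.pem /tmp/le-demo/live/example.com/fullchain.pem
# Reading through the symlink works only while archive/ is reachable:
cat /tmp/le-demo/live/example.com/fullchain.pem
```

Inside the running container, ls -lL /etc/letsencrypt/live/api.melodymaster.io/ shows whether the symlink targets actually resolve.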

Related

fast api and nginx in docker-compose shows connect() failed (111: Connection refused) while connecting to upstream, client: 172.27.0.1, server:

fastapi nginx template
source code
It's a FastAPI and nginx template, but currently it does not work as I expected.
When you curl localhost, it responds with 502 Bad Gateway instead of the Hello World defined in main.py.
docker logs web shows connect() failed (111: Connection refused) while connecting to upstream, client: 172.27.0.1, server: in the nginx container.
But why? How can I fix this error?
file tree
.
├── Dockerfile
├── README.md
├── docker-compose.yml
├── main.py
├── nginx
│   ├── Dockerfile
│   └── nginx.conf
└── requirements.txt
build and start
docker-compose up -d --build
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cb79d9efaf75 fast-api-nginx-template_web "/docker-entrypoint.…" 4 minutes ago Up 4 minutes 0.0.0.0:80->80/tcp, :::80->80/tcp web
6b049c395508 fast-api-nginx-template_api "uvicorn main:app --…" 4 minutes ago Up 4 minutes 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp api
curl localhost
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.21.1</center>
</body>
</html>
docker logs web
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2021/08/28 13:58:37 [error] 31#31: *8 connect() failed (111: Connection refused) while connecting to upstream, client: 172.27.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://172.27.0.3:8000/", host: "localhost"
172.27.0.1 - - [28/Aug/2021:13:58:37 +0000] "GET / HTTP/1.1" 502 157 "-" "curl/7.64.1"
docker logs api
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
docker-compose
version: "3.9"
services:
  web:
    container_name: web
    build: nginx
    ports:
      - 80:80
    depends_on:
      - api
    networks:
      - local-net
  api:
    container_name: api
    build: .
    ports:
      - 8000:8000
    networks:
      - local-net
    expose:
      - 8000
networks:
  local-net:
    driver: bridge
web
nginx/Dockerfile
FROM nginx
RUN apt-get update
COPY nginx.conf /etc/nginx/nginx.conf
nginx/nginx.conf
worker_processes 1;

events {
    worker_connections 1024;
    accept_mutex off;
    use epoll;
}

http {
    include mime.types;

    upstream app_serve {
        server web:8000;
    }

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    server {
        listen 80 ipv6only=on;
        server_name 127.0.0.1;

        location / {
            proxy_pass http://app_serve;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
api
Dockerfile
ARG BASE_IMAGE=python:3.8-buster
FROM $BASE_IMAGE

RUN apt-get -y update && \
    apt-get install -y --no-install-recommends \
        build-essential \
        openssl libssl-dev \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

ARG USER_NAME=app
ARG USER_UID=1000
ARG PASSWD=password
RUN useradd -m -s /bin/bash -u $USER_UID $USER_NAME && \
    gpasswd -a $USER_NAME sudo && \
    echo "${USER_NAME}:${PASSWD}" | chpasswd && \
    echo "${USER_NAME} ALL=(ALL) ALL" >> /etc/sudoers

COPY ./ /app
RUN chown -R ${USER_NAME}:${USER_NAME} /app
USER $USER_NAME
WORKDIR /app
ENV PATH $PATH:/home/${USER_NAME}/.local/bin

RUN pip3 install --user --upgrade pip
RUN pip3 install --user -r requirements.txt
RUN rm -rf ~/.cache/pip/*

EXPOSE 8000
# Execute
CMD ["uvicorn", "main:app", "--host", "127.0.0.1", "--port", "8000"]
main.py
from typing import Optional
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def load_root():
    return {"Hello": "World"}

@app.get("/items/{item_id}")
def load_item(item_id: int, q: Optional[str] = None):
    return {"item_id": item_id, "q": q}
At quick glance, I think it's because you've bound uvicorn to 127.0.0.1, therefore, you'd need an additional reverse proxy in api container in order to serve it outside (unless your container runs in a host network, but by default it's a bridge).
Binding uvicorn to 0.0.0.0 should fix this issue.
Being more specific:
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
To confirm this, I've run my FastAPI container bound to both 0.0.0.0 and 127.0.0.1:
$ podman ps --no-trunc
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7121420b401ee1803474ff156280a5b5b154c55ae0fd47e1976d8fe9e5331c72 localhost/fastapi-mvc-template:test /usr/bin/fastapi serve --host 127.0.0.1 4 minutes ago Up 4 minutes ago 0.0.0.0:8000->8000/tcp test
dd35d89f161f818e5b34c138bd3d42bdb4d84100b76999d4fc2444c7d0d1a1d9 localhost/fastapi-mvc-template:0.1.0 /usr/bin/fastapi serve --host 0.0.0.0 2 minutes ago Up 2 minutes ago 0.0.0.0:9000->8000/tcp test_ok
$ curl localhost:8000/api/ready
curl: (56) Recv failure: Connection reset by peer
$ curl localhost:9000/api/ready
{"status":"ok"}
The upstream app_serve section in nginx.conf refers to the web container on port 8000, but, looking at the docker-compose file, it is the api container that exposes port 8000.
So, IMHO the right NGINX config would rather be:
# ...
upstream app_serve {
    server api:8000;
}
# ...

Content-Type problem with Rails and NGINX

I have a dockerized Ruby-on-Rails application which also uses NGINX.
When I build the containers (using docker-compose build && docker-compose up) the containers start, but the CSS and JavaScript files can't be loaded, because their Content-Type is text/html.
What did I configure wrong? I tried several other solutions in the nginx.conf but nothing worked.
My configuration:
docker-compose.yml
version: '3'
services:
  app:
    container_name: trainee_manager_app
    build:
      context: .
      dockerfile: ./docker/app/Dockerfile
    environment:
      - DB_USERNAME=postgres
      - DB_PASSWORD=postgres
      - DB_PORT=5432
    depends_on:
      - db
  db:
    image: postgres:13
    container_name: trainee_manager_db
    volumes:
      - pg_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=admin
  server:
    container_name: trainee_manager_server
    build:
      context: .
      dockerfile: ./docker/web/Dockerfile
    depends_on:
      - app
    ports:
      - 86:80
networks:
  default:
    external:
      name: trainee-manager-network
volumes:
  pg_data:
app/Dockerfile:
FROM ruby:2.7.3
RUN apt-get update -qq
RUN apt-get install -y make autoconf libtool make gcc perl gettext gperf && git clone https://github.com/FreeTDS/freetds.git && cd freetds && sh ./autogen.sh && make && make install
# for postgres
RUN apt-get install -y libpq-dev
# for nokogiri
RUN apt-get install -y libxml2-dev libxslt1-dev
# for a JS runtime
RUN apt-get install -y nodejs
# Setting an environment variable for the Rails app
ENV RAILS_ROOT /var/www/trainee_manager
RUN mkdir -p $RAILS_ROOT
# Setting the working directory
WORKDIR $RAILS_ROOT
# Setting up the Environment
ENV RAILS_ENV='production'
ENV RACK_ENV='production'
# Adding the Gems
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
RUN bundle install --jobs 20 --retry 5 --without development test
# Adding all Project files
COPY . .
RUN bundle exec rake assets:clobber
RUN bundle exec rake assets:precompile
RUN ["chmod", "+x", "docker/app/entrypoint.sh"]
ENTRYPOINT ["docker/app/entrypoint.sh"]
EXPOSE 3000
CMD ["bundle", "exec", "puma", "-p", "3000"]
app/entrypoint.sh:
#!/bin/bash
# Remove a potentially pre-existing server.pid for Rails.
rm -f /app/tmp/pids/server.pid
bundle exec rake db:create RAILS_ENV=production
bundle exec rake db:migrate RAILS_ENV=production
bundle exec rake db:seed RAILS_ENV=production
# Then exec the container's main process (what's set as CMD in the Dockerfile).
exec "$@"
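As an aside, the entrypoint's final exec line is what hands control to the container's CMD: in POSIX shell, "$@" re-expands each CMD argument as a separate word, so the server process replaces the shell as PID 1 and receives signals directly. A tiny sketch of the expansion (plain shell, no Docker required; names hypothetical):

```shell
# "$@" preserves each argument as its own word, which is what an
# entrypoint's exec "$@" relies on to launch the CMD unchanged.
set -- echo "hello world" from-cmd   # simulate the container's CMD
run_cmd() { "$@"; }                  # stand-in for exec "$@"
run_cmd "$@"                         # prints: hello world from-cmd
```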
web/Dockerfile:
# Base Image
FROM nginx
# Dependencies
RUN apt-get update -qq && apt-get -y install apache2-utils
# Establish where Nginx should look for files
ENV RAILS_ROOT /var/www/trainee_manager
# Working Directory
WORKDIR $RAILS_ROOT
# Creating the Log-Directory
RUN mkdir log
# Copy static assets
COPY public public/
# Copy the NGINX Config-Template
COPY docker/web/nginx.conf /tmp/docker.nginx
# substitute variable references in the Nginx config template for real values from the environment
# put the final config in its place
RUN envsubst '$RAILS_ROOT' < /tmp/docker.nginx > /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
web/nginx.conf:
upstream rails_app {
    server app:3000;
}

server {
    # Defining the Domain
    server_name <SERVERNAME>;
    client_max_body_size 200m;

    # Define the public application root
    root $RAILS_ROOT/public;
    index index.html;

    # define where Nginx should write its logs
    access_log $RAILS_ROOT/log/nginx.access.log;
    error_log $RAILS_ROOT/log/nginx.error.log;

    # deny requests for files that should never be accessed
    location ~ /\. {
        deny all;
    }
    location ~* ^.+\.(rb|log)$ {
        deny all;
    }

    # serve static (compiled) assets directly if they exist (for rails production)
    location ~ ^/(assets|images|javascripts|stylesheets|swfs|system)/ {
        try_files $uri @rails;
        access_log off;
        gzip_static on; # to serve pre-gzipped version
        expires max;
        add_header Cache-Control public;

        # Some browsers still send conditional-GET requests if there's a
        # Last-Modified header or an ETag header even if they haven't
        # reached the expiry date sent in the Expires header.
        add_header Last-Modified "";
        add_header ETag "";
        break;
    }

    # send non-static file requests to the app server
    location / {
        try_files $uri @rails;
    }

    location @rails {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://rails_app;
    }

    listen 80;
}

how do I launch an nginx docker container with a site that is copied to the container?

I wrote the following Dockerfile, ran the container, and opened localhost. The default nginx page opens, although the site should be served from the /var/www/html folder. How can I solve this problem?
FROM nginx
RUN apt-get update && apt-get -y install zip
WORKDIR /02_Continuous_Delivery/html
COPY . /var/www/html
RUN rm -f /var/www/html/site.zip; zip -r /var/www/html/site.zip /02_Continuous_Delivery/html
EXPOSE 80
Your problem is that, by default, the nginx image provides a config (/etc/nginx/conf.d/default.conf) like this:
server {
    listen 80;
    listen [::]:80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}
So, you should either copy your site to the /usr/share/nginx/html directory, or provide your own config and set the location root there to the /var/www/html directory.
For the second solution, you can create a file default.conf with content like:
server {
    listen 80;
    listen [::]:80;
    server_name localhost;

    location / {
        root /var/www/html;
        index index.html index.htm;
    }
}
and copy it to the /etc/nginx/conf.d/ directory in your Dockerfile:
COPY default.conf /etc/nginx/conf.d/
I have a solution:
FROM nginx
RUN apt-get update && apt-get -y install zip
COPY 02_Continuous_Delivery/html /usr/share/nginx/html
RUN zip -r /usr/share/nginx/html/site.zip /usr/share/nginx/html
EXPOSE 80
#CMD ["nginx","-g","daemon off;"]

Nginx, Certbot & Docker Compose: /etc/nginx/user.conf.d/*.conf: No such file or directory

I'm running a Ruby on Rails web application using Docker and Docker Compose. I had 3 containers working on the IP address at port 3000. I am now trying to serve the app on the IP address/domain name directly, rather than on port 3000. To do this, I am trying to use nginx as a proxy server with this image (https://hub.docker.com/r/staticfloat/nginx-certbot/) so that I can also have an SSL cert.
My issue is that I still can't access the application from the IP address without port 3000. Also, it can only be accessed over http rather than https.
I'm receiving the following output from the nginx container when I run 'docker-compose up':
frontend_1 | templating scripts from /etc/nginx/user.conf.d to /etc/nginx/conf.d
frontend_1 | Substituting variables
frontend_1 | -> /etc/nginx/user.conf.d/*.conf
frontend_1 | /scripts/util.sh: line 125: /etc/nginx/user.conf.d/*.conf: No such file or directory
frontend_1 | Done with startup
frontend_1 | Run certbot
frontend_1 | ++ parse_domains
frontend_1 | ++ for conf_file in /etc/nginx/conf.d/*.conf*
frontend_1 | ++ xargs echo
frontend_1 | ++ sed -n -r -e 's&^\s*ssl_certificate_key\s*\/etc/letsencrypt/live/(.*)/privkey.pem;\s*(#.*)?$&\1&p' /etc/nginx/conf.d/certbot.conf
frontend_1 | + auto_enable_configs
frontend_1 | + for conf_file in /etc/nginx/conf.d/*.conf*
frontend_1 | + keyfiles_exist /etc/nginx/conf.d/certbot.conf
frontend_1 | ++ parse_keyfiles /etc/nginx/conf.d/certbot.conf
frontend_1 | ++ sed -n -e 's&^\s*ssl_certificate_key\s*\(.*\);&\1&p' /etc/nginx/conf.d/certbot.conf
frontend_1 | + return 0
frontend_1 | + '[' conf = nokey ']'
frontend_1 | + set +x
I think that the below output relates to my issue. However, I still haven't been able to figure this out.
/scripts/util.sh: line 125: /etc/nginx/user.conf.d/*.conf: No such file or directory
I have two .conf files which are both located at myapp/config/nginx/user.conf.d/
Here are the two .conf files:
upstream docker {
    server web:3000 fail_timeout=0;
}

server {
    listen 443 ssl;
    server_name myapp.com;

    ssl_certificate /etc/letsencrypt/live/myapp.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp.com/privkey.pem;

    try_files $uri/index.html $uri @docker;
    client_max_body_size 4G;

    location @docker {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://docker;
    }
}
and
upstream docker {
    server web:3000 fail_timeout=0;
}

server {
    listen 443 ssl;
    server_name myapp.ie;

    ssl_certificate /etc/letsencrypt/live/myapp.ie/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp.ie/privkey.pem;

    try_files $uri/index.html $uri @docker;
    client_max_body_size 4G;

    location @docker {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://docker;
    }
}
Here's my Dockerfile:
# Use the Ruby 2.7.2 image from Docker Hub as the base image (https://hub.docker.com/_/ruby)
FROM ruby:2.7.2-buster
# The directory to store this application's files.
RUN mkdir /myapp
RUN mkdir -p /usr/local/nvm
WORKDIR /myapp
# Install 3rd party dependencies.
RUN apt-get update -qq && \
    apt-get install -y curl \
        build-essential \
        libpq-dev \
        postgresql \
        postgresql-contrib \
        postgresql-client
# # The directory to store this application's files.
# RUN mkdir /myapp
# RUN mkdir -p /usr/local/nvm
# WORKDIR /myapp
RUN curl -sL https://deb.nodesource.com/setup_15.x | bash -
RUN apt-get install -y nodejs
RUN node -v
RUN npm -v
# Copy Gems.
COPY Gemfile Gemfile.lock package.json yarn.lock ./
# Run bundle install to install the Ruby dependencies.
RUN gem install bundler && bundle update --bundler && bundle install
RUN npm install -g yarn && yarn install --check-files
# Copy all the application's files into the /myapp directory.
COPY . /myapp
# Compile assets
ENV RAILS_ENV production
ENV RAILS_SERVE_STATIC_FILES true
RUN bundle exec rake assets:precompile
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
# Start the main process by setting "rails server -b 0.0.0.0" as the command to run when this container starts.
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
Here's my entrypoint.sh file:
#!/bin/bash
set -e
# For development: check if the gems are installed; if not, install them.
if ! [ bundle check ] ; then
    bundle install
fi
# Remove a potentially pre-existing server.pid for Rails.
rm -f /myapp/tmp/pids/server.pid
# # Yarn - Check Files.
yarn install --check-files
# Run the command - runs any arguments passed into this entrypoint file.
exec "$@"
Here's my docker-compose.yml file:
version: "3.8"
services:
  web:
    restart: unless-stopped
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - bundle-volume:/usr/local/bundle
    ports:
      - "3000:3000"
    depends_on:
      - database
      - elasticsearch
    environment:
      RAILS_ENV: production
      DATABASE_NAME: myapp_production
      DATABASE_USER: postgres
      DATABASE_PASSWORD: **********
      POSTGRES_PASSWORD: **********
      DATABASE_HOST: database
      ELASTICSEARCH_URL: http://elasticsearch:9200
  database:
    restart: unless-stopped
    image: postgres:12.3
    container_name: database
    volumes:
      - db_volume:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - 5432:5432
    environment:
      DATABASE_PASSWORD: **********
      POSTGRES_PASSWORD: **********
  elasticsearch:
    restart: unless-stopped
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.3
    volumes:
      - ./docker_data/elasticsearch/data:/usr/share/elasticsearch/data
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.type=single-node
    ports:
      - 9200:9200
    ulimits:
      memlock:
        soft: -1
        hard: -1
  frontend:
    restart: unless-stopped
    image: staticfloat/nginx-certbot
    ports:
      - 80:80/tcp
      - 443:443/tcp
    depends_on:
      - web
    environment:
      CERTBOT_EMAIL: myapp@gmail.com
    volumes:
      - /etc/nginx/user.conf.d:/etc/nginx/user.conf.d:ro
      - letsencrypt:/etc/letsencrypt
volumes:
  bundle-volume:
    external: false
  db_volume:
  data:
  letsencrypt:
    external: false
Appreciate any help.
As you mention, you have two .conf files, both located at myapp/config/nginx/user.conf.d/.
Please move both files to /etc/nginx/user.conf.d, since that is the host directory you have mounted into the container. After moving the files, bring the containers down and back up, and see if that resolves the issue. Let me know if I can help more with this.
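Alternatively, instead of copying the files onto the host, the bind mount could point at the directory the files already live in. A sketch against the compose file above, assuming docker-compose.yml sits in the myapp directory (so the relative path is hypothetical):

```yaml
  frontend:
    # ... rest of the service unchanged ...
    volumes:
      - ./config/nginx/user.conf.d:/etc/nginx/user.conf.d:ro
      - letsencrypt:/etc/letsencrypt
```

Either way, the goal is the same: the container's /etc/nginx/user.conf.d must actually contain the two .conf files at startup.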

Is there a way to use nextjs with docker and nginx

I have a nextjs project that I wish to run using Docker and nginx.
I wish to use nginx that connects to nextjs behind the scenes (only nginx can talk to nextjs, user needs to talk to nginx to talk to nextjs).
Assuming a standard Next.js project structure and the Dockerfile content provided below, is there a way to use nginx in Docker with Next.js?
I'm aware I can use Docker Compose, but I'd like to keep it under one Docker image, since I plan to push the image to Heroku web hosting.
NOTE: I'm using Server Side Rendering
dockerfile
# Base on offical Node.js Alpine image
FROM node:latest as builder
# Set working directory
WORKDIR /usr/app
# install node-prune (https://github.com/tj/node-prune)
RUN curl -sfL https://install.goreleaser.com/github.com/tj/node-prune.sh | bash -s -- -b /usr/local/bin
# Copy package.json and package-lock.json before other files
# Utilise Docker cache to save re-installing dependencies if unchanged
COPY package.json ./
COPY yarn.lock ./
# Install dependencies
RUN yarn install --frozen-lockfile
# Copy all files
COPY ./ ./
# Build app
RUN yarn build
# remove development dependencies
RUN yarn install --production
# run node prune. Reduce node_modules size
RUN /usr/local/bin/node-prune
#######################################################
FROM node:alpine
WORKDIR /usr/app
# COPY package.json next.config.js .env* ./
# COPY --from=builder /usr/app/public ./public
COPY --from=builder /usr/app/.next ./.next
COPY --from=builder /usr/app/node_modules ./node_modules
EXPOSE 3000
CMD ["node_modules/.bin/next", "start"]
dockerfile inspired by https://github.com/vercel/next.js/blob/canary/examples/with-docker/Dockerfile.multistage
Edit: nginx default.conf
upstream nextjs_upstream {
    server nextjs:3000;
    # We could add additional servers here for load-balancing
}

server {
    listen 80 default_server;
    server_name _;
    server_tokens off;

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;

    location / {
        proxy_pass http://nextjs_upstream;
    }
}
In order to be able to use Nginx and NextJS together in a single Docker container without using Docker-Compose, you need to use Supervisord
Supervisor is a client/server system that allows its users to control
a number of processes on UNIX-like operating systems.
The issue wasn't the nginx config or the Dockerfile. It was running both nginx and Next.js when starting the container. Since I couldn't find another way to run both, supervisord was the tool I needed.
The following will be needed for it to work
Dockerfile
# Base on offical Node.js Alpine image
FROM node:latest as builder
# Set working directory
WORKDIR /usr/app
# Copy package.json and package-lock.json before other files
# Utilise Docker cache to save re-installing dependencies if unchanged
COPY package.json ./
COPY yarn.lock ./
# Install dependencies
RUN yarn install --frozen-lockfile
# Copy all files
COPY ./ ./
# Build app
RUN yarn build
# remove development dependencies
RUN yarn install --production
#######################################################
FROM nginx:alpine
WORKDIR /usr/app
RUN apk add nodejs-current npm supervisor
RUN mkdir -p /var/log/supervisor && mkdir -p /etc/supervisor/conf.d
# Remove any existing config files
RUN rm /etc/nginx/conf.d/*
# Copy nginx config files
# *.conf files in conf.d/ dir get included in main config
COPY ./.nginx/default.conf /etc/nginx/conf.d/
# COPY package.json next.config.js .env* ./
# COPY --from=builder /usr/app/public ./public
COPY --from=builder /usr/app/.next ./.next
COPY --from=builder /usr/app/node_modules ./node_modules
# supervisor base configuration
ADD supervisor.conf /etc/supervisor.conf
# replace $PORT in nginx config (provided by executior) and start supervisord (run nextjs and nginx)
CMD sed -i -e 's/$PORT/'"$PORT"'/g' /etc/nginx/conf.d/default.conf && \
    supervisord -c /etc/supervisor.conf
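The sed in the CMD works because a $ in the middle of a sed pattern is literal (it only anchors at the end of the regex), so 's/$PORT/.../' matches the placeholder text $PORT in the config. A quick sketch with a hypothetical file:

```shell
# Replace the literal placeholder $PORT in an nginx config with the
# runtime value of the PORT environment variable.
export PORT=8080
printf 'listen $PORT default_server;\n' > /tmp/default.conf
sed -i -e 's/$PORT/'"$PORT"'/g' /tmp/default.conf
cat /tmp/default.conf   # prints: listen 8080 default_server;
```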
supervisor.conf
[supervisord]
nodaemon=true
[program:nextjs]
directory=/usr/app
command=node_modules/.bin/next start
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
autorestart=true
[program:nginx]
command=nginx -g 'daemon off;'
killasgroup=true
stopasgroup=true
redirect_stderr=true
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
autorestart=true
nginx config (default.conf)
upstream nextjs_upstream {
    server localhost:3000;
    # We could add additional servers here for load-balancing
}

server {
    listen $PORT default_server;
    server_name _;
    server_tokens off;

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;

    location / {
        proxy_pass http://nextjs_upstream;
    }
}
NOTE: using nginx as a reverse proxy. NextJS will be running on port 3000. The user won't be able to reach it directly. It has to go through nginx.
Building docker image
docker build -t nextjs-img -f ./dockerfile .
Running docker container
docker run --rm -e 'PORT=80' -p 8080:80 -d --name nextjs-img nextjs-img:latest
Go to localhost:8080
You can use docker-compose to run Nginx and your Next.js app in separate Docker containers, with a bridge network between them.
Then, in the nginx conf:
server {
    listen 80;
    listen 443 ssl;
    server_name localhost [dns].com;

    ssl_certificate certs/cert.pem;
    ssl_certificate_key certs/cert.key;

    location / {
        proxy_pass http://nextApplication; # name based on your docker-compose file
        proxy_http_version 1.1;
        proxy_read_timeout 90;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host [dns].com;
        proxy_cache_bypass $http_upgrade;
    }
}
I didn't use an upstream block; the load balancer sits in front of nginx (at the cloud provider level).
