NextJS revalidate doesn't work with standalone server - docker

It looks like the revalidation option doesn't work when using the standalone server of Next.js.
My getStaticProps returns this:
return {
  props: {
    page,
  },
  revalidate: 60,
};
I have the following Next.js config:
{
  output: "standalone",
  reactStrictMode: true,
  swcMinify: true,
  i18n: {
    locales: ["default", "en", "nl", "fr"],
    defaultLocale: "default",
    localeDetection: false,
  },
  trailingSlash: true,
}
And I use the following Dockerfile to create a container:
# Install dependencies only when needed
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
# Install dependencies based on the preferred package manager
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
  if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
  elif [ -f package-lock.json ]; then npm ci; \
  elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
  else echo "Lockfile not found." && exit 1; \
  fi
# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN yarn build
# If using npm comment out above and use below instead
# RUN npm run build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]
I have tried many things myself, but nothing works. On the internet I keep coming across the advice that you have to run a Next.js server, but if I'm correct, that is exactly what the standalone output does.

Digging into the issue further, I found some more errors. It turns out that the next/image component's default loader uses squoosh, because it is quick to install and suitable for a development environment. But for a production environment using output: "standalone", you must install sharp.
So I ran
yarn add sharp
and that error stopped happening. But I still didn't get updated content. Because I could not get this working and I needed to continue with the project, I looked into On-Demand Revalidation. Read more about it here:
https://nextjs.org/docs/basic-features/data-fetching/incremental-static-regeneration
So what I did was:
create an API endpoint in my Next.js solution that revalidates a page
use a webhook in my CMS to call that API endpoint
That did the trick for me. Now I can just update something in my CMS, the webhook is triggered, and I get a freshly built page with the changes!
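For reference, here is a minimal sketch of what such a revalidation endpoint can look like with the pages router. The secret name and the revalidated path are placeholders for your own setup:
// pages/api/revalidate.js
export default async function handler(req, res) {
  // Reject calls that don't carry the shared secret configured in the CMS webhook.
  if (req.query.secret !== process.env.REVALIDATION_SECRET) {
    return res.status(401).json({ message: "Invalid token" });
  }
  try {
    // res.revalidate() (available since Next.js 12.1) regenerates the page on demand.
    await res.revalidate("/");
    return res.json({ revalidated: true });
  } catch (err) {
    // If regeneration fails, Next.js keeps serving the last successfully generated page.
    return res.status(500).send("Error revalidating");
  }
}
The CMS webhook then simply calls /api/revalidate?secret=... whenever content changes.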
In my eyes this is a better solution because:
It only revalidates the page when it is really needed.
After the change/revalidation, the next visitor directly sees the updated version.
So I'm happy that it didn't work, because with a bit of extra effort I ended up with a much better solution!
Hope this helps somebody.

Related

Authenticate private github dependency in a react app for dockerfile

I have a private repository that I use as a dependency in my frontend React app. Currently I download it using a fine-grained PAT that gives me access to that GitHub repository, noted in the .env file as:
PERSONAL_ACCESS_TOKEN: blablabla
I understand that it is not safe to put the personal access token inside the .env file, since anyone with access to the file or to the environment could read it.
Even when I did use it in the .env file, the package still did not install, and it gave me the following error:
fatal: could not read Username for 'https://github.com': terminal prompts disabled
What would be the best practice for validating and using this token to install the private dependency when running docker build to create the Docker image?
My current Dockerfile is as follows:
# Build stage:
FROM node:14-alpine AS build
RUN apk update && apk upgrade && \
    apk add --no-cache bash git openssh
# set working directory
WORKDIR /react
# install app dependencies
# (copy _just_ the package.json here so Docker layer caching works)
COPY ./package*.json yarn.lock ./
RUN yarn install --network-timeout 1000000
# build the application
COPY . .
RUN yarn build
# Final stage:
FROM node:14-alpine
# set working directory
WORKDIR /react
# get the build tree
COPY --from=build /react/build/ ./build/
# Install `serve` to run the application.
RUN npm install -g serve
EXPOSE 3000
# explain how to run the application
# ENTRYPOINT ["npx"]
CMD ["serve", "-s", "build"]
I use GitHub Actions to deploy the actual production version; would I need to simply set the token up as an Actions secret?
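A common approach here (a sketch, assuming BuildKit; the secret id, file name, and exact token URL form are assumptions to adapt) is to pass the PAT as a build secret, so it is only mounted during the RUN step that needs it and is never written into an image layer or the environment:
# syntax=docker/dockerfile:1
# (the directive above must be the first line of the Dockerfile)
# In the build stage, in place of the plain `yarn install`:
RUN --mount=type=secret,id=gh_token \
    # Rewrite GitHub URLs to embed the token; depending on the token type you
    # may need a username prefix such as x-access-token: before the token.
    git config --global url."https://$(cat /run/secrets/gh_token)@github.com/".insteadOf "https://github.com/" && \
    yarn install --network-timeout 1000000
Then build with:
docker build --secret id=gh_token,src=./gh_token.txt -t react-app .
In GitHub Actions you would indeed store the PAT as a repository secret and feed it to the same --secret flag, instead of committing it in a .env file.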

Dockerfile issue - Why is the binary dlv not being found - No such file or directory

I had a Dockerfile that was working fine. However, to remote debug the app, I read that I needed to install dlv in the image, then run dlv and pass it the app I am trying to debug. After installing dlv and attempting to run it, I get the error
exec /dlv: no such file or directory
This is the Dockerfile:
FROM golang:1.18-alpine AS builder
# Build Delve for debugging
RUN go install github.com/go-delve/delve/cmd/dlv@latest
# Create and change to the app directory.
WORKDIR /app
ENV CGO_ENABLED=0
# Retrieve application dependencies.
COPY go.* ./
RUN go mod download
# Copy local code to the container image.
COPY . ./
# Build the binary.
RUN go build -gcflags="all=-N -l" -o fooapp
# Use the official Debian slim image for a lean production container.
FROM debian:buster-slim
EXPOSE 8000 40000
RUN set -x && apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
    ca-certificates && \
    rm -rf /var/lib/apt/lists/*
# Copy the binary to the production image from the builder stage.
#COPY --from=builder /app/fooapp /app/fooapp #commented this out
COPY --from=builder /go/bin/dlv /dlv
# Run dlv and pass fooapp as parameter
CMD ["/dlv", "--listen=:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/app/fooapp"]
The above results in exec /dlv: no such file or directory
I am not sure why this is happening. Being new to Docker, I have tried different ways to debug it. I used dive to check whether the image has dlv at the path /dlv, and it does. I have also attached an image of it.
You built dlv in an Alpine-based distro. The dlv executable is linked against libc.musl:
# ldd dlv
linux-vdso.so.1 (0x00007ffcd251d000)
libc.musl-x86_64.so.1 => not found
But then you switched to the glibc-based image debian:buster-slim. That image doesn't have the required libraries:
# find / -name libc.musl*
<nothing found>
That's why you can't execute dlv: the dynamic linker fails to find the proper lib.
You need to build in a glibc-based Docker image. For example, replace the first line with:
FROM golang:bullseye AS builder
By the way, after you build, you need to run the container in privileged mode:
$ docker build . -t try-dlv
...
$ docker run --privileged --rm try-dlv
API server listening at: [::]:40000
2022-10-30T10:51:02Z warning layer=rpc Listening for remote connections (connections are not authenticated nor encrypted)
In a non-privileged container, dlv is not allowed to spawn a child process:
$ docker run --rm try-dlv
API server listening at: [::]:40000
2022-10-30T10:55:46Z warning layer=rpc Listening for remote connections (connections are not authenticated nor encrypted)
could not launch process: fork/exec /app/fooapp: operation not permitted
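A narrower alternative to --privileged that is usually enough for debuggers (an assumption to verify in your environment, since dlv mainly needs ptrace) is to grant only that capability and relax the seccomp profile:
$ docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --rm try-dlv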
Really Minimal Image
You use debian:buster-slim to minimize the image; its size is 80 MB. But if you need a really small image, use busybox: it is only 4.86 MB of overhead.
FROM golang:bullseye AS builder
# Build Delve for debugging
RUN go install github.com/go-delve/delve/cmd/dlv@latest
# Create and change to the app directory.
WORKDIR /app
ENV CGO_ENABLED=0
# Retrieve application dependencies.
COPY go.* ./
RUN go mod download
# Copy local code to the container image.
COPY . ./
# Build the binary.
RUN go build -o fooapp .
# Download certificates
RUN set -x && apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
    ca-certificates
# Use the official Debian slim image for a lean production container.
FROM busybox:glibc
EXPOSE 8000 40000
# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/fooapp /app/fooapp
# COPY --from=builder /app/ /app
COPY --from=builder /go/bin/dlv /dlv
COPY --from=builder /etc/ssl /etc/ssl
# Run dlv and pass fooapp as parameter
CMD ["/dlv", "--listen=:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/app/fooapp"]
# ENTRYPOINT ["/bin/sh"]
The image size is 25 MB, of which 18 MB is dlv and 2 MB is the Hello World application.
When choosing the images, take care to use the same flavor of libc. golang:bullseye links against glibc, hence the minimal image must be glibc-based too.
But if you want a bit more comfort, use alpine with the gcompat package installed. It is a reasonably rich Linux with lots of external packages, for just an extra 6 MB compared to busybox.
FROM golang:bullseye AS builder
# Build Delve for debugging
RUN go install github.com/go-delve/delve/cmd/dlv@latest
# Create and change to the app directory.
WORKDIR /app
ENV CGO_ENABLED=0
# Copy local code to the container image.
COPY . ./
# Retrieve application dependencies.
RUN go mod tidy
# Build the binary.
RUN go build -o fooapp .
# Use alpine lean production container.
# FROM busybox:glibc
FROM alpine:latest
# gcompat is the package that lets glibc-based apps run
# ca-certificates contains trusted TLS CA certs
# bash is just for the comfort, I hate /bin/sh
RUN apk add gcompat ca-certificates bash
EXPOSE 8000 40000
# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/fooapp /app/fooapp
# COPY --from=builder /app/ /app
COPY --from=builder /go/bin/dlv /dlv
# Run dlv and pass fooapp as parameter
CMD ["/dlv", "--listen=:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/app/fooapp"]
# ENTRYPOINT ["/bin/bash"]
TL;DR
Run apt-get install musl, then /dlv should work as expected.
Explanation
Follow these steps:
docker run -it <image-name> sh
apt-get install file
file /dlv
Then you can see the following output:
/dlv: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-musl-x86_64.so.1, Go BuildID=xV8RHgfpp-zlDlpElKQb/DOLzpvO_A6CJb7sj1Nxf/aCHlNjW4ruS1RXQUbuCC/JgrF83mgm55ntjRnBpHH, not stripped
The confusing no such file or directory error (see this question for related discussions) is caused by the missing /lib/ld-musl-x86_64.so.1.
As a result, the solution is to install the musl library by following its documentation.
My answer is inspired by this answer.
The no such file or directory error indicates either your binary file does not exist, or your binary is dynamically linked to a library that does not exist.
As said in this answer, delve is linked against libc.musl. So for your delve build, you can disable CGO, since CGO can result in dynamic links to libc/musl:
# Build Delve for debugging
RUN CGO_ENABLED=0 go install github.com/go-delve/delve/cmd/dlv@latest
...
This even allows you to use a scratch image for your final target later, and does not require you to install any additional packages like musl, use a glibc-based image, or run in privileged mode.
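A sketch of what that final stage could then look like, assuming the application binary itself is also built with CGO_ENABLED=0 (as in the builder stage earlier in this thread) so that both binaries are fully static:
# Final stage: empty base image, only the two static binaries.
FROM scratch
EXPOSE 8000 40000
COPY --from=builder /app/fooapp /app/fooapp
COPY --from=builder /go/bin/dlv /dlv
CMD ["/dlv", "--listen=:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/app/fooapp"]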

Running multiple CMD commands at once

I'm wondering what I may be doing wrong. I'm currently trying to revise this CMD to the exec form, but it's not running right. The original, unedited version runs fine, but the array version does not. Does combining commands not work in the exec form, or what am I missing? The modified version immediately exits once it starts.
Original:
CMD sshd & cd /app && npm start
Modified:
CMD ["sshd", "&", "cd", "/app", "&&", "npm", "start"]
My complete dockerfile:
FROM node:10-alpine
WORKDIR /app
COPY . /app
RUN npm install && npm cache clean --force
# CMD sshd & cd /app && npm start
# CMD ["sshd", "&", "cd", "/app", "&&", "npm", "start"]
You should:
Delete sshd: it's not installed in your image, it's unnecessary, and it's all but impossible to set up securely.
Delete the cd part, since the WORKDIR declaration above this has already switched into that directory.
Then your CMD is just a simple command.
FROM node:10-alpine
# note, no sshd, user accounts, host keys, ...
# WORKDIR does the same thing as `cd /app`
WORKDIR /app
COPY . /app
RUN npm install && npm cache clean --force
CMD ["npm", "start"]
If you want to run multiple commands or launch an unmanaged background process, these things require a shell, so you can't usefully use the CMD exec form. In the form you show in the question, the main command is sshd alone, and it receives six arguments, including the literal strings & and &&.
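If you really do need shell syntax in the exec form, one option (equivalent to the shell-form CMD above) is to make the shell itself the command, so that & and && are interpreted rather than passed as literal arguments:
# Exec form whose single argument is a complete shell command line
CMD ["/bin/sh", "-c", "sshd & cd /app && npm start"]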
Define a script and put all of your commands into it.
E.g., I call this script startup.sh:
#!/bin/sh
# node:10-alpine has no bash, so use /bin/sh; sshd must actually be installed in the image
sshd
cd /app
# exec replaces the shell so npm receives signals directly
exec npm start
And call this script in the CMD:
COPY startup.sh /app/data/startup.sh
RUN chmod +x /app/data/startup.sh
CMD ["/app/data/startup.sh"]

How run next.js with node-sharp for docker

I have a problem getting sharp to work in my Dockerfile:
Error: 'sharp' is required to be installed in standalone mode for the image optimization to function correctly
Next.js with sharp works fine for local development:
next 12.0.1
sharp 0.30.2
node 16.xx
npm 8.xx
OS - macOS Monterey - 12.2.1, M1 PRO
next.config.js
module.exports = {
  experimental: {
    outputStandalone: true,
  },
}
Dockerfile:
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN npm run build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# You only need to copy next.config.js if you are NOT using the default configuration
COPY --from=builder /app/public ./public
COPY --from=builder /app/package.json ./package.json
COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/next-i18next.config.js ./
COPY --from=builder /app/next-sitemap.js ./
COPY --from=builder /app/jsconfig.json ./jsconfig.json
COPY --from=builder /app/data/ ./data
COPY --from=builder /app/components/ ./components
COPY --from=builder /app/utils/ ./utils
COPY --from=builder /app/assets/ ./assets
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]
.env file:
NEXT_SHARP_PATH=/tmp/node_modules/sharp next start
Sharp is installed in package.json
I checked both the Next.js/Vercel guides:
https://nextjs.org/docs/messages/install-sharp
https://nextjs.org/docs/messages/sharp-missing-in-production
I build and run the Docker image with:
docker build --no-cache . -t website-app && docker run --name website -p 3000:3000 website-app
I figured it out.
I removed NEXT_SHARP_PATH=/tmp/node_modules/sharp next start from .env.
Local Docker doesn't work with node-sharp without NEXT_SHARP_PATH, which is weird.
But I deployed it to my K8s cluster with Docker and it works as expected.
npm i sharp
This fixed the problem for me.
What works for me is to change the Node version of the Alpine image in the Dockerfile:
FROM node:18-alpine AS deps (instead of node:16)
Adding or removing NEXT_SHARP_PATH in my .env file didn't work for me.
For the final step (the one labeled "runner") in your Dockerfile, replace the base image with node:16-slim. This image is Debian-based, so it is approximately 20 MB larger than the alpine variant, but it has the binaries required to run sharp.
When using a similar Dockerfile to yours, I found that the NEXT_SHARP_PATH environment variable was not needed when using a Debian-based Node image.
And for reference, here is the NextJS documentation about the error message: https://nextjs.org/docs/messages/sharp-missing-in-production
Update: You may also be able to specify the libc implementation found in your base Docker image by using the following flags:
RUN npm_config_platform=linux npm_config_arch=x64 npm_config_libc=glibc npm ci
For more information about what these flags are, see the documentation for sharp.

Dockerize NextJS Application with Prisma

I have created a Next.js application that uses Prisma to connect to the database. When I start the application on my computer, everything works. Unfortunately, I get errors when I try to run the application in a Docker container. The container can be created and started, and the start page of the application is shown (there are no database queries there yet). However, when I click on the first page with a database query, I get error code 500 - Internal Server Error and the following error message in the console:
PrismaClientInitializationError: Unknown PRISMA_QUERY_ENGINE_LIBRARY undefined. Possible binaryTargets: darwin, darwin-arm64, debian-openssl-1.0.x, debian-openssl-1.1.x, rhel-openssl-1.0.x, rhel-openssl-1.1.x, linux-arm64-openssl-1.1.x, linux-arm64-openssl-1.0.x, linux-arm-openssl-1.1.x, linux-arm-openssl-1.0.x, linux-musl, linux-nixos, windows, freebsd11, freebsd12, openbsd, netbsd, arm, native or a path to the query engine library.
You may have to run prisma generate for your changes to take effect.
at cb (/usr/src/node_modules/@prisma/client/runtime/index.js:38689:17)
at async getServerSideProps (/usr/src/.next/server/pages/admin/admin.js:199:20)
at async Object.renderToHTML (/usr/src/node_modules/next/dist/server/render.js:428:24)
at async doRender (/usr/src/node_modules/next/dist/server/next-server.js:1144:38)
at async /usr/src/node_modules/next/dist/server/next-server.js:1236:28
at async /usr/src/node_modules/next/dist/server/response-cache.js:64:36 {
clientVersion: '3.6.0',
errorCode: undefined
}
My Dockerfile:
# Dockerfile
# base image
FROM node:16-alpine3.12
# create & set working directory
RUN mkdir -p /usr/src
WORKDIR /usr/src
# copy source files
COPY . /usr/src
COPY package*.json ./
COPY prisma ./prisma/
# install dependencies
RUN npm install
COPY . .
# start app
RUN npm run build
EXPOSE 3000
CMD npm run start
My docker-compose.yaml:
version: "3"
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: web
    restart: always
    volumes:
      - ./:/usr/src/app
    ports:
      - "3000:3000"
    env_file:
      - .env
My package.json:
{
  "name": "supermarket",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start",
    "lint": "next lint"
  },
  "prisma": {
    "schema": "prisma/schema.prisma"
  },
  "dependencies": {
    "@prisma/client": "^3.6.0",
    "axios": "^0.22.0",
    "cookie": "^0.4.1",
    "next": "latest",
    "nodemailer": "^6.6.5",
    "react": "17.0.2",
    "react-cookie": "^4.1.1",
    "react-dom": "17.0.2"
  },
  "devDependencies": {
    "eslint": "7.32.0",
    "eslint-config-next": "11.1.2",
    "prisma": "^3.6.0"
  }
}
I've found the error. I think it was a problem with the M1 chip.
I changed node:16-alpine3.12 to node:lts and added some commands to the Dockerfile, which now looks like this:
# base image
FROM node:lts
# create & set working directory
RUN mkdir -p /usr/src
WORKDIR /usr/src
# copy source files
COPY . /usr/src
COPY package*.json ./
COPY prisma ./prisma/
RUN apt-get -qy update && apt-get -qy install openssl
# install dependencies
RUN npm install
RUN npm install @prisma/client
COPY . .
RUN npx prisma generate --schema ./prisma/schema.prisma
# start app
RUN npm run build
EXPOSE 3000
CMD npm run start
I hope this can also help other people 😊
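If you hit similar platform-specific engine problems on Apple Silicon, it can also help to pin the build platform explicitly when the image is deployed to an x86_64 host (an assumption; this requires BuildKit/buildx emulation, and the tag name is just an example):
docker build --platform linux/amd64 -t supermarket .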
I have been having a similar issue, which I have just solved.
I think what you need to do is change the last block in your Dockerfile to this:
# start app
RUN npm run build
RUN npx prisma generate
EXPOSE 3000
CMD npm run start
I think that will solve your issue.
I've found this solution with some workarounds:
https://gist.github.com/malteneuss/a7fafae22ea81e778654f72c16fe58d3
In short:
# Dockerfile
...
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npx prisma generate # <---important to support Prisma query engine in Alpine Linux in final image
RUN npm run build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
...
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
# important to support Prisma DB migrations in bootstrap.sh
COPY --chown=nextjs:nodejs prisma ./prisma/
COPY --chown=nextjs:nodejs bootstrap.sh ./
...
CMD ["./bootstrap.sh"]
This Dockerfile is based on the official Next.js with Docker example project, adapted to include Prisma. To run migrations on app start, we can add a shell script that does so:
# bootstrap.sh
#!/bin/sh
# Run migrations
DATABASE_URL="postgres://postgres:postgres@db:5432/appdb?sslmode=disable" npx prisma migrate deploy
# start app
DATABASE_URL="postgres://postgres:postgres@db:5432/workler?sslmode=disable" node server.js
Unfortunately, we need to set DATABASE_URL explicitly here; otherwise migrations don't work, because Prisma can't find the environment variable (e.g. one from a docker-compose file).
And last but not least: because the Alpine Linux base image uses the musl C library, the Prisma client has to be compiled against it in the builder image. To get the correct version, we need to add this info to Prisma's schema.prisma file:
# schema.prisma
generator client {
  provider      = "prisma-client-js"
  binaryTargets = ["native", "linux-musl"] // important to support the Prisma query engine on Alpine Linux; otherwise: "PrismaClientInitializationError: Query engine binary for current platform "linux-musl" could not be found."
}
I had luck this way:
FROM node:17-slim as dependencies
# set working directory
WORKDIR /usr/src/app
# Copy package and lockfile
COPY package.json ./
COPY yarn.lock ./
COPY prisma ./prisma/
RUN apt-get -qy update && apt-get -qy install openssl
# install dependencies
RUN yarn --frozen-lockfile
COPY . .
# ---- Build ----
FROM dependencies as build
# install all dependencies
# build project
RUN yarn build
# ---- Release ----
FROM dependencies as release
# copy build
COPY --from=build /usr/src/app/.next ./.next
COPY --from=build /usr/src/app/public ./public
# don't run as root
USER node
# expose and set port number to 3000
EXPOSE 3000
ENV PORT 3000
# enable run as production
ENV NODE_ENV=production
# start app
CMD ["yarn", "start"]
