How to run Next.js with sharp (node-sharp) in Docker

I have a problem getting sharp to work in my Dockerfile.
Error: 'sharp' is required to be installed in standalone mode for the image optimization to function correctly
Next.js with sharp works fine in local development:
next 12.0.1
sharp 0.30.2
node 16.xx
npm 8.xx
OS - macOS Monterey - 12.2.1, M1 PRO
next.config.js
module.exports = {
  experimental: {
    outputStandalone: true,
  },
}
Dockerfile:
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN npm run build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# You only need to copy next.config.js if you are NOT using the default configuration
COPY --from=builder /app/public ./public
COPY --from=builder /app/package.json ./package.json
COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/next-i18next.config.js ./
COPY --from=builder /app/next-sitemap.js ./
COPY --from=builder /app/jsconfig.json ./jsconfig.json
COPY --from=builder /app/data/ ./data
COPY --from=builder /app/components/ ./components
COPY --from=builder /app/utils/ ./utils
COPY --from=builder /app/assets/ ./assets
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]
.env file:
NEXT_SHARP_PATH=/tmp/node_modules/sharp next start
Sharp is installed in package.json
I checked both Next/Vercel tuts:
https://nextjs.org/docs/messages/install-sharp
https://nextjs.org/docs/messages/sharp-missing-in-production
RUN Docker:
docker build --no-cache . -t website-app && docker run --name website -p 3000:3000 website-app

I figured it out.
I removed NEXT_SHARP_PATH=/tmp/node_modules/sharp next start from the .env file.
Oddly, local Docker doesn't work with sharp without NEXT_SHARP_PATH, but after deploying it to my K8s cluster with Docker it works as expected.

npm i sharp
This fixed the problem for me.
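If you are building in Docker, one place to apply this is the deps stage of a Dockerfile like the one above. This is only a sketch under that assumption; the extra npm install sharp is the only change:
FROM node:16-alpine AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json package-lock.json ./
# Install dependencies and make sure sharp is available to the standalone server
RUN npm ci && npm install sharp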

What works for me is changing the version of the Node Alpine image in the Dockerfile:
FROM node:18-alpine AS deps (instead of node:16)
Adding or removing node-sharp in my .env file didn't work for me.

For the final step (the one labeled "runner") in your Dockerfile, replace the base image with node:16-slim. This image is Debian-based, so it is approximately 20 MB larger than the alpine variant, but it has the binaries required to run sharp.
When using a similar Dockerfile to yours, I found that the NEXT_SHARP_PATH environment variable was not needed when using a Debian-based Node image.
And for reference, here is the NextJS documentation about the error message: https://nextjs.org/docs/messages/sharp-missing-in-production
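As a sketch of that suggestion (trimmed to the essential COPY lines; add back the extra config and asset copies from your original Dockerfile as needed), only the final stage changes:
FROM node:16-slim AS runner
WORKDIR /app
ENV NODE_ENV production
# Debian-based images use groupadd/useradd rather than BusyBox addgroup/adduser
RUN groupadd --system --gid 1001 nodejs && useradd --system --uid 1001 --gid nodejs nextjs
COPY --from=builder /app/public ./public
COPY --from=builder /app/next.config.js ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]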
Update: You may also be able to specify the libc implementation found in your base Docker image by using the following flags:
RUN npm_config_platform=linux npm_config_arch=x64 npm_config_libc=glibc npm ci
For more information about what these flags are, see the documentation for sharp.

Related

Dockerfile copy from build failing for create-react-app

I have a React app that I'm trying to dockerize for production. It was based off create-react-app. To run the app locally, I go to the app's root folder and run npm start. This works. I built the app with npm run build. Then I try to create the Docker image with docker build . -t app-name. This fails because it can't find the folder I'm trying to copy the built app from (I think).
Here's what's in my Dockerfile:
FROM node:13.12.0-alpine as build
WORKDIR /src
ENV PATH /node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
RUN npm install --silent
COPY . ./
RUN npm run build
FROM nginx:alpine
COPY --from=build build /usr/share/nginx/html
EXPOSE 80
CMD ["npm", "start"]
I'm pretty sure I've got something wrong on the COPY --from line.
The app structure is like this, if it matters
-app-name (folder)
-src (folder)
-build (folder)
-dockerfile
-other stuff, but I think I listed what matters
The error I get is failed to compute cache key: "/build" not found: not found
I'm running my commands in Windows PowerShell.
What do I need to change?
You were almost correct.
The build folder is generated at /src/build, not at /build, which is why you see that error. Where does the /src come from? From the WORKDIR /src instruction.
So this should work: COPY --from=build /src/build /usr/share/nginx/html
Besides, since you are using the nginx server to serve the built static files, you don't need to (and can't) run npm start with CMD. Just leave it out, and you can access the application on port 80.
So a working Dockerfile would be:
FROM node:13.12.0-alpine as build
WORKDIR /src
ENV PATH /node_modules/.bin:$PATH
COPY package*.json ./
RUN npm install --silent
COPY . ./
RUN npm run build
FROM nginx:alpine
COPY --from=build /src/build /usr/share/nginx/html
EXPOSE 80
This follows the Dockerfile in the question above; in some specific cases, more advanced configuration might be required.

Docker folder gets duplicated in container

I have this Dockerfile:
Dockerfile
FROM python:3.8
ADD Pipfile.lock /app/Pipfile.lock
ADD Pipfile /app/Pipfile
WORKDIR /app
COPY . /app
RUN pip install pipenv
RUN pipenv install --system --deploy --ignore-pipfile
ENV FLASK_APP=app/http/api/endpoints.py
ENV FLASK_RUN_PORT=4433
ENV FLASK_ENV=development
ENTRYPOINT ["python"]
CMD ["-m", "flask", "run"]
Why does my app land in
/app/app/http/api
inside the Docker container? The app directory gets duplicated.
How do I copy it to:
/app/http/api
How can I fix it?
Update 1:
My docker-compose and Dockerfile and ls output.
https://0bin.net/paste/ePRws72M#IV8H6iRJ+UMJFr8lB7CRQYKwsnGYflsmOlyvFxZA7zE
You've got two problems here:
COPY . /app copies the whole working directory from the host machine into the /app directory in the image. If you just want the ./app subdirectory, change that to COPY ./app /app.
Your docker-compose file mounts the current working directory of the host machine into the container at the mount point /app. This hides the existing /app directory of the image, so you will either need to copy your app to a different directory or use a different mount point for your volume (see the sketch below).
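For the second point, here is a minimal sketch of the compose change; the service name and mount path are assumptions, since the compose file is only linked, not shown:
services:
  api:                      # hypothetical service name
    build: .
    volumes:
      # Mount the host code somewhere other than /app so it does not hide
      # the image's /app directory (or drop the volume entirely in production)
      - ./app:/srv/app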

Flask and React App in single Docker Container

Good day SO,
I know this is bad practice and that I am supposed to have one App per container, but is there a way for me to have two services running concurrently in the same container, and how would I go about writing the Dockerfile for it?
My current Dockerfile for the Flask (Backend) App:
FROM python:3.6.9-slim-buster
WORKDIR /app/flask_backend
ENV PYTHONPATH "${PYTHONPATH}:/app"
COPY ./flask_backend ./
COPY requirements.txt .
RUN pip install -r requirements.txt
CMD python3 app/webapp/app.py
My React (Frontend) Dockerfile:
FROM node:12.18.0-alpine as build
WORKDIR /app/react_frontend
ENV PATH /app/node_modules/.bin:$PATH
ENV NODE_OPTIONS="--max-old-space-size=8192"
COPY ./react_frontend/package.json ./
COPY ./react_frontend/package-lock.json ./
RUN npm ci
RUN npm install react-scripts@3.4.1 -g
RUN npm install serve -g
COPY ./react_frontend ./
CMD ["serve", "-s", "build", "-l", "3000"]
My attempt to launch both apps within the same Docker Container was to merge the two Dockerfiles, but the resulting container does not have the data from the first Dockerfile, and I am unsure how to proceed.
My merged Dockerfile:
FROM python:3.6.9-slim-buster
WORKDIR /app/flask_backend
ENV PYTHONPATH "${PYTHONPATH}:/app"
COPY ./flask_backend ./
COPY requirements.txt .
RUN pip install -r requirements.txt
CMD python3 app/webapp/app.py
FROM node:12.18.0-alpine as build
WORKDIR /app/react_frontend
ENV PATH /app/node_modules/.bin:$PATH
ENV NODE_OPTIONS="--max-old-space-size=8192"
COPY ./react_frontend/package.json ./
COPY ./react_frontend/package-lock.json ./
RUN npm ci
RUN npm install react-scripts@3.4.1 -g
RUN npm install serve -g
COPY ./react_frontend ./
CMD ["serve", "-s", "build", "-l", "3000"]
I am a beginner with Docker, so I foresee several problems with this method, such as communication between the two apps (the backend uses port 5000). Any guidance will be greatly appreciated!
A React application doesn't usually have a server per se (development-only Docker setups aside). Instead, you run a tool like Webpack to compile it down to static files, which you can then serve to the browser, which then runs them.
On your host system you'd run something like
yarn build
which produces a dist directory; then you'd copy this into your Flask static directory.
If you do this entirely ahead-of-time, then you can run your application out of a Python virtual environment, which will be a much easier development and test setup, and the Dockerfile you show won't change.
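A minimal sketch of that ahead-of-time flow, assuming the frontend lives in ./react_frontend and Flask serves files from ./flask_backend/static (create-react-app writes to build/ rather than dist/, so adjust the directory name to whatever your build tool produces):
cd react_frontend
npm ci          # or: yarn install
npm run build   # or: yarn build
cp -r build/* ../flask_backend/static/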
If you want to build this entirely in Docker (for example to take advantage of a more Docker-native automated build system) a multi-stage build matches well here. You can use a first stage to build the front-end application, and then COPY that into the final application in the second stage. That looks roughly like:
FROM node:12.18.0-alpine as build
WORKDIR /app/react_frontend
COPY ./react_frontend/package.json ./
COPY ./react_frontend/package-lock.json ./
RUN npm ci
COPY ./react_frontend ./
RUN npm run build
FROM python:3.6.9-slim-buster
WORKDIR /app/flask_backend
ENV PYTHONPATH "${PYTHONPATH}:/app"
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY ./flask_backend ./
COPY --from=build /app/react_frontend/dist/ ./static/
CMD python3 app/webapp/app.py
This approach is not compatible with setups that overwrite Docker image contents using bind mounts. A non-Docker host Node and Python setup will be a much easier development environment, and for this particular setup isn't likely to be substantially different from the Docker setup.

NestJS minimize dockerfile

I want to dockerize my NestJS API. With the config listed below, the image ends up 319 MB. What would be a simpler way to reduce the image size than multi-stage builds?
Dockerfile
FROM node:12.13-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
CMD npm start
.dockerignore
.git
.gitignore
node_modules/
dist/
To reduce the Docker image size you can use:
- Multi-stage builds
- npm prune
With a multi-stage build you have two (or more) FROM directives: the first stage does the build, and the second stage just copies the build output from the first, temporary layer and contains the instructions for running the app. In our case, we should copy the dist & node_modules directories.
The second important point is to correctly split dependencies between 'devDependencies' & 'dependencies' in your package.json file.
After installing the deps in the first stage, you should run npm prune --production to remove devDependencies from node_modules.
FROM node:12.14.1-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . ./
RUN npm run build && npm prune --production
FROM node:12.14.1-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY --from=build /app/dist /app/dist
COPY --from=build /app/node_modules /app/node_modules
EXPOSE 3000
ENTRYPOINT [ "node" ]
CMD [ "dist/main.js" ]
If you have trouble with node-gyp, or just want to see a full example with comments, check this gist:
https://gist.github.com/nzvtrk/cba2970b1df9091b520811e521d9bd44
More useful references:
https://docs.docker.com/develop/develop-images/multistage-build/
https://docs.npmjs.com/cli/prune

Does automatically updating my package.json at commit time prevent docker build from reusing the cache?

I've just started to get deep inside dockerfile syntax.
Here is the one I use currently:
FROM node:12-alpine as install
WORKDIR /Backend-graphql
COPY ./src ./src
COPY ./index.js ./index.js
COPY ./schema.graphql ./schema.graphql
COPY ./package.json ./
COPY ./package-lock.json ./package-lock.json
RUN npm install
FROM node:12-alpine as prismawork
WORKDIR /PrismaWork
COPY --from=install /Backend-graphql .
COPY ./datamodel.prisma ./datamodel.prisma
COPY ./prisma.yml ./prisma.yml
RUN npx prisma deploy
RUN npx prisma generate
FROM node:12-alpine
#curl needed for healthcheck
RUN apk --update --no-cache add curl
WORKDIR /app
COPY --from=prismawork /PrismaWork .
ENTRYPOINT ["npm", "start"]
EXPOSE 4000
From personal tests and documentation found online, I've followed this advice:
Use multi-stage builds
But I noticed something: Docker does not reuse the cache after the first COPY layer that differs between the current and the following builds. And I think that's a problem, because I use a git hook that automatically bumps the version based on semantic-versioning commit messages, which modifies my package.json. So at each commit, docker build re-runs npm install and all subsequent layers.
First of all, have I understood Docker's cache layering system correctly?
Secondly, should I use another file for the automatic version bump and COPY it at the very end of my Dockerfile?
First of all, have I understood Docker's cache layering system correctly?
Yes, you have.
If any change happens in any step, like a change in package.json, Docker will rebuild all of the following steps.
There is no need to copy from the same image multiple times. We also run npm install after the unrelated steps (like installing curl), so those earlier layers stay cached.
FROM node:12-alpine
#curl needed for healthcheck
RUN apk --update --no-cache add curl
WORKDIR /app
COPY ./src ./src
COPY ./index.js ./index.js
COPY ./schema.graphql ./schema.graphql
COPY ./datamodel.prisma ./datamodel.prisma
COPY ./prisma.yml ./prisma.yml
COPY ./package.json ./
COPY ./package-lock.json ./package-lock.json
RUN npm install
RUN npx prisma deploy
RUN npx prisma generate
ENTRYPOINT ["npm", "start"]
EXPOSE 4000
Multi-stage builds are useful when you need to build across multiple images, like in this example:
FROM node:12-alpine
RUN npm install -g gzipper
WORKDIR /build
ADD . .
RUN npm install
ARG CONFIGURATION
RUN npm run build:${CONFIGURATION}
RUN gzipper --gzip-level=6 ./dist
FROM nginx:latest
WORKDIR /usr/share/nginx/html
COPY --from=0 /build/dist .
COPY nginx/default.conf /etc/nginx/conf.d/default.conf
