I have the following project separated into packages as follows:
libs/one/package.json
libs/one/index.js
libs/two/package.json
libs/two/index.js
package.json
I am looking for a way to copy all libs/*/package.json files in a single COPY instruction. In addition, I want to avoid reinstalling all packages on every build; they should come from the Docker layer cache instead.
This is my Dockerfile:
FROM node:16.14-alpine as base
RUN npm install -g lerna@4.0.0
WORKDIR /usr/app
FROM base as builder
ARG SERVICE_DIR
ARG TSCONFIG_BUILD_FILE
COPY .npmrc .
COPY lerna.json .
COPY ${TSCONFIG_BUILD_FILE} ./tsconfig.build.json
COPY package*.json ./
# copy package dependencies
COPY libs/service-http-handler/package.json ./libs/service-http-handler/
COPY libs/utils/package.json ./libs/utils/
COPY libs/commons/package.json ./libs/commons/
COPY libs/cache/package.json ./libs/cache/
COPY libs/internal-dto/package.json ./libs/internal-dto/
COPY libs/clients/package.json ./libs/clients/
COPY ${SERVICE_DIR}/package.json ./${SERVICE_DIR}/
# install service and package dependencies
RUN lerna bootstrap -- --production --no-optional --ignore-scripts --include-dependencies --scope ${SERVICE_DIR}
# Copy all libs and build in dependency order
COPY libs/ ./libs
COPY ${SERVICE_DIR}/ ./${SERVICE_DIR}
RUN lerna run build --include-dependencies --scope $(echo ${SERVICE_DIR} | cut -d'/' -f 2)
# copy recursively, following symbolic links
RUN cp -RL ./node_modules/ /tmp/node_modules/
# Runner
FROM base
ARG SERVICE_DIR
# copy runtime dependencies
COPY --from=builder /tmp/node_modules/ ./node_modules/
# Copy runtime service
COPY --from=builder /usr/app/${SERVICE_DIR}/dist/ ./dist/
COPY ./${SERVICE_DIR}/package.json ./
EXPOSE 5001
#TODO - modify start script in production mode
CMD ["npm", "run", "start"]
Any idea how I can avoid re-running the step below on every change to the libs source files?
RUN lerna bootstrap -- --production --no-optional --ignore-scripts --include-dependencies --scope ${SERVICE_DIR}
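One pattern that can help is a throwaway stage that copies the whole libs tree and prunes it down to just the manifests. The builder then COPYs the pruned tree in one instruction, and because Docker checksums copied content, the lerna bootstrap layer stays cached as long as no package.json actually changed. A sketch (the manifests stage name is mine, and it assumes bootstrap only needs the package.json files, with lerna.json and the root package.json copied separately as above):
FROM node:16.14-alpine AS manifests
WORKDIR /usr/app
COPY libs/ ./libs/
# drop everything except the manifests; source edits no longer change this tree's content
RUN find libs -type f ! -name 'package.json' -delete
FROM base AS builder
# ...same ARGs and COPY instructions as above, then one line for all lib manifests:
COPY --from=manifests /usr/app/libs/ ./libs/
The manifests stage itself re-runs on any libs change, but its output is byte-identical unless a manifest changed, so the subsequent COPY and the RUN lerna bootstrap layers are cache hits.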
Related
I have a problem getting sharp to work in my Dockerfile.
Error: 'sharp' is required to be installed in standalone mode for the image optimization to function correctly
Next.js with sharp works fine in local development:
next 12.0.1
sharp 0.30.2
node 16.xx
npm 8.xx
OS - macOS Monterey - 12.2.1, M1 PRO
next.config.js
module.exports = {
  experimental: {
    outputStandalone: true,
  },
}
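(A side note, and an assumption about your upgrade path: on newer Next.js versions, 12.2 and later, this option graduated out of experimental, so the equivalent config becomes:)
module.exports = {
  output: 'standalone',
}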
Dockerfile:
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN npm run build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# You only need to copy next.config.js if you are NOT using the default configuration
COPY --from=builder /app/public ./public
COPY --from=builder /app/package.json ./package.json
COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/next-i18next.config.js ./
COPY --from=builder /app/next-sitemap.js ./
COPY --from=builder /app/jsconfig.json ./jsconfig.json
COPY --from=builder /app/data/ ./data
COPY --from=builder /app/components/ ./components
COPY --from=builder /app/utils/ ./utils
COPY --from=builder /app/assets/ ./assets
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]
.env file:
NEXT_SHARP_PATH=/tmp/node_modules/sharp next start
Sharp is installed in package.json
I checked both Next/Vercel tuts:
https://nextjs.org/docs/messages/install-sharp
https://nextjs.org/docs/messages/sharp-missing-in-production
Running Docker:
docker build --no-cache . -t website-app && docker run --name website -p 3000:3000 website-app
I figured it out.
I removed NEXT_SHARP_PATH=/tmp/node_modules/sharp next start from .env
Local Docker doesn't work with node-sharp without NEXT_SHARP_PATH, which is weird.
But I deployed it to my K8s cluster with Docker and it works as expected.
npm i sharp
This fixed the problem for me.
What works for me is to change the version of Alpine Linux in the Dockerfile:
FROM node:18-alpine AS deps (instead of node:16)
Adding or removing NEXT_SHARP_PATH in my .env file didn't work for me.
For the final step (the one labeled "runner") in your Dockerfile, replace the base image with node:16-slim. This image is Debian-based, so it is approximately 20 MB larger than the alpine variant, but it has the binaries required to run sharp.
When using a similar Dockerfile to yours, I found that the NEXT_SHARP_PATH environment variable was not needed when using a Debian-based Node image.
And for reference, here is the NextJS documentation about the error message: https://nextjs.org/docs/messages/sharp-missing-in-production
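As a minimal sketch of what that swap looks like (reusing the builder stage from the question unchanged; I use the image's built-in node user instead of the Alpine-specific addgroup/adduser commands):
FROM node:16-slim AS runner
WORKDIR /app
ENV NODE_ENV production
# the Debian-based image provides the glibc that sharp's prebuilt libvips binaries expect
COPY --from=builder /app/public ./public
COPY --from=builder --chown=node:node /app/.next/standalone ./
COPY --from=builder --chown=node:node /app/.next/static ./.next/static
USER node
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]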
Update: You may also be able to specify the libc implementation found in your base Docker image by using the following flags:
RUN npm_config_platform=linux npm_config_arch=x64 npm_config_libc=glibc npm ci
For more information about what these flags are, see the documentation for sharp.
I have the following Dockerfile:
FROM node:12.4.0 as builder
COPY ./package.json /src/package.json
ENV PORT 3000
WORKDIR /src
RUN npm install
COPY ./lerna.json /src/lerna.json
COPY ./packages/package-api/package.json /src/packages/package-api/package.json
COPY ./packages/package-config/package.json /src/packages/package-config/package.json
COPY ./packages/package-sitemap/package.json /src/packages/package-sitemap/package.json
COPY ./packages/package-utils/package.json /src/packages/package-utils/package.json
RUN npm run clean
COPY . /src
WORKDIR /src/packages/package-sitemap
RUN npm run build
FROM node:12.4.0
RUN echo "starting 2nd stage"
WORKDIR /src/packages/package-sitemap
COPY --from=builder /src/packages/package-sitemap/build/bundle.js .
EXPOSE ${PORT}
CMD ["node" , "bundle.js"]
Running the bundle built from this Dockerfile results in the following error: Error: Cannot find module 'debug'.
However, if I remove the second stage and build the bundle in a single stage, there's no error and the bundle runs successfully, i.e. with the following Dockerfile:
FROM node:12.4.0 as builder
COPY ./package.json /src/package.json
ENV PORT 3000
WORKDIR /src
RUN npm install
COPY ./lerna.json /src/lerna.json
COPY ./packages/package-api/package.json /src/packages/package-api/package.json
COPY ./packages/package-config/package.json /src/packages/package-config/package.json
COPY ./packages/package-sitemap/package.json /src/packages/package-sitemap/package.json
COPY ./packages/package-utils/package.json /src/packages/package-utils/package.json
RUN npm run clean
COPY . /src
WORKDIR /src/packages/package-sitemap
RUN npm run build
EXPOSE ${PORT}
CMD ["node" , "build/bundle.js"]
I'm wondering what the difference could be. Is it because in the second Dockerfile node_modules still exists, and webpack somehow leaves references in the bundle that resolve against that node_modules?
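That is most likely it: if webpack is configured to treat dependencies as externals (for example via webpack-node-externals, a common choice for server bundles), the bundle still calls require('debug') at runtime, and the second stage has no node_modules to resolve it from. A hedged sketch of the second stage that carries the modules along (assuming debug lives in the hoisted root node_modules):
FROM node:12.4.0
WORKDIR /src/packages/package-sitemap
COPY --from=builder /src/packages/package-sitemap/build/bundle.js .
# Node resolves require() by walking up the directory tree, so the hoisted
# modules at /src/node_modules are found from /src/packages/package-sitemap
COPY --from=builder /src/node_modules /src/node_modules
# note: ENV PORT from the builder stage does not carry over to this stage
EXPOSE 3000
CMD ["node", "bundle.js"]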
I want to dockerize my NestJS API. With the config listed below, the image ends up 319 MB. What would be a simpler way to reduce the image size than multi-staging?
Dockerfile
FROM node:12.13-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
CMD npm start
.dockerignore
.git
.gitignore
node_modules/
dist/
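If you want to skip multi-staging entirely, one simpler option is to build on the host and install only runtime dependencies in a single stage. A sketch, assuming you remove dist/ from .dockerignore, run npm run build before docker build, and that dist/main.js is your compiled entry point (the NestJS default):
FROM node:12.13-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
# devDependencies never enter any layer, so no prune step is needed
RUN npm install --production
# copy the locally built output (requires dist/ to be removed from .dockerignore)
COPY dist/ ./dist/
CMD ["node", "dist/main.js"]
The trade-off is that the build no longer happens inside Docker, which is exactly what the multi-stage approach below addresses.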
For reducing Docker image size you can use:
Multi-stage build
npm prune
When using a multi-stage build you have two (or more) FROM directives; as usual, the first stage does the build, and the second stage just copies the build output from the first, temporary layer and holds the instructions to run the app. In our case, we should copy the dist & node_modules directories.
The second important point is to correctly split dependencies between 'devDependencies' & 'dependencies' in your package.json file.
After you install deps in the first stage, run npm prune --production to remove devDependencies from node_modules.
FROM node:12.14.1-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . ./
RUN npm run build && npm prune --production
FROM node:12.14.1-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY --from=build /app/dist /app/dist
COPY --from=build /app/node_modules /app/node_modules
EXPOSE 3000
ENTRYPOINT [ "node" ]
CMD [ "dist/main.js" ]
If you have trouble with node-gyp, or just want to see a full example with comments, check this gist:
https://gist.github.com/nzvtrk/cba2970b1df9091b520811e521d9bd44
More useful references:
https://docs.docker.com/develop/develop-images/multistage-build/
https://docs.npmjs.com/cli/prune
I've just started digging into Dockerfile syntax.
Here is the one I use currently:
FROM node:12-alpine as install
WORKDIR /Backend-graphql
COPY ./src ./src
COPY ./index.js ./index.js
COPY ./schema.graphql ./schema.graphql
COPY ./package.json ./
COPY ./package-lock.json ./package-lock.json
RUN npm install
FROM node:12-alpine as prismawork
WORKDIR /PrismaWork
COPY --from=install /Backend-graphql .
COPY ./datamodel.prisma ./datamodel.prisma
COPY ./prisma.yml ./prisma.yml
RUN npx prisma deploy
RUN npx prisma generate
FROM node:12-alpine
#curl needed for healthcheck
RUN apk --update --no-cache add curl
WORKDIR /app
COPY --from=prismawork /PrismaWork .
ENTRYPOINT ["npm", "start"]
EXPOSE 4000
From personal tests and documentation found online, I've followed this advice:
Use multi-stage builds
But I noticed something: Docker does not reuse the cache after the first COPY layer that differs, in the current and all following build stages. I think this is a problem, because I use an automatic version-bump git hook, based on semantic-versioning commit message syntax, that modifies my package.json. So at each commit, docker build re-runs npm install and all subsequent layers.
First of all, have I understood Docker's cache layering system?
Secondly, should I use another file for the automatic version bump and COPY it at the very end of my Dockerfile?
First of all, have I understood Docker's cache layering system?
Yes, you have.
If anything changes in a step, such as a change in package.json, Docker will rebuild that step and all the steps after it.
There is no need to copy from the same image multiple times. We also run npm install after the unrelated COPY steps, so that those steps stay cached.
FROM node:12-alpine
#curl needed for healthcheck
RUN apk --update --no-cache add curl
WORKDIR /app
COPY ./src ./src
COPY ./index.js ./index.js
COPY ./schema.graphql ./schema.graphql
COPY ./datamodel.prisma ./datamodel.prisma
COPY ./prisma.yml ./prisma.yml
COPY ./package.json ./
COPY ./package-lock.json ./package-lock.json
RUN npm install
RUN npx prisma deploy
RUN npx prisma generate
ENTRYPOINT ["npm", "start"]
EXPOSE 4000
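Given the version-bump hook, a reordering sketch that copies the manifests first may serve you better: source edits then reuse the npm install layer, and only a manifest change re-runs it (the prisma steps still follow the source copy, since they need the schema files):
FROM node:12-alpine
#curl needed for healthcheck
RUN apk --update --no-cache add curl
WORKDIR /app
# manifests first: npm install re-runs only when these files change
COPY ./package.json ./package-lock.json ./
RUN npm install
# sources afterwards: edits here leave the install layer cached
COPY ./src ./src
COPY ./index.js ./schema.graphql ./datamodel.prisma ./prisma.yml ./
RUN npx prisma deploy
RUN npx prisma generate
ENTRYPOINT ["npm", "start"]
EXPOSE 4000
Note that the hook's write to package.json still invalidates the install layer on every commit, which is an argument for keeping the bumped version out of package.json, as the question suggests.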
Multi-stage builds are also useful when you need to build across multiple images, as in this example:
FROM node:12-alpine
RUN npm install -g gzipper
WORKDIR /build
ADD . .
RUN npm install
ARG CONFIGURATION
RUN npm run build:${CONFIGURATION}
RUN gzipper --gzip-level=6 ./dist
FROM nginx:latest
WORKDIR /usr/share/nginx/html
COPY --from=0 /build/dist .
COPY nginx/default.conf /etc/nginx/conf.d/default.conf
I am using yarn workspaces and I have these packages in my package.json:
"workspaces": ["packages/*"]
I am trying to create a docker image to deploy and I have the following Dockerfile:
# production dockerfile
FROM node:9.2
# add code
COPY ./packages/website/dist /cutting
WORKDIR /cutting
COPY package.json /cutting/
RUN yarn install --pure-lockfile && yarn cache clean --production
CMD npm run serve
But I get the following error:
error An unexpected error occurred:
"https://registry.yarnpkg.com/@cutting%2futil: Not found"
@cutting/util is the name of one of my workspace packages.
So the problem is that there is no source code in the Docker image, so yarn is trying to install the package from the registry.
What is the best way to handle workspaces when deploying to a Docker image?
This setup wouldn't work outside of the Docker VM either, so it is bound to fail inside Docker too.
The problem is that you built the code and copied only the bundled output. Yarn workspaces looks for a package.json that doesn't exist in the dist folder. Workspaces just creates a link in a common node_modules folder to the other workspace you are using, so the source code has to be there. (BTW, why don't you build the code inside the Docker VM? That way the source code and dist would both be available.)
Here is my Dockerfile. I use yarn workspaces and lerna, but without lerna it should be similar. You want to build your shared libraries, then check that the build works locally by running the code in your dist folder.
###############################################################################
# Step 1 : Builder image
FROM node:11 AS builder
WORKDIR /usr/src/app
ENV NODE_ENV production
RUN npm i -g yarn
RUN npm i -g lerna
COPY ./lerna.json .
COPY ./package* ./
COPY ./yarn* ./
COPY ./.env .
COPY ./packages/shared/ ./packages/shared
COPY ./packages/api/ ./packages/api
# Install dependencies and build whatever you have to build
RUN yarn install --production
RUN lerna bootstrap
RUN cd /usr/src/app/packages/shared && yarn build
RUN cd /usr/src/app/packages/api && yarn build
###############################################################################
# Step 2 : Run image
FROM node:11
LABEL maintainer="Richard T"
LABEL version="1.0"
LABEL description="This is our dist docker image"
RUN npm i -g yarn
RUN npm i -g lerna
ENV NODE_ENV production
ENV NPM_CONFIG_LOGLEVEL error
ARG PORT=3001
ENV PORT $PORT
WORKDIR /usr/src/app
COPY ./package* ./
COPY ./lerna.json ./
COPY ./.env ./
COPY ./yarn* ./
COPY --from=builder /usr/src/app/packages/shared ./packages/shared
COPY ./packages/api/package* ./packages/api/
COPY ./packages/api/.env* ./packages/api/
COPY --from=builder /usr/src/app/packages/api ./packages/api
RUN yarn install
CMD cd ./packages/api && yarn start-production
EXPOSE $PORT
###############################################################################