Docker build not using cache for reactjs project using yarn - docker

I have the following Dockerfile
# Based on the official Node.js Alpine image
FROM node:16-alpine3.14
# Set working directory
WORKDIR /usr/app
# Install PM2 globally
RUN npm install --global pm2
# Copy package.json and yarn.lock before other files
# Utilise Docker cache to save re-installing dependencies if unchanged
COPY ./package.json ./
COPY ./yarn.lock ./
# Install dependencies
RUN yarn install
# Copy all files
COPY ./ ./
# Build app
RUN yarn build
# Expose the listening port
EXPOSE 3000
# Run container as non-root (unprivileged) user
# The node user is provided in the Node.js Alpine base image
USER node
# Run npm start script when container starts
# CMD [ "npm", "start" ]
CMD [ "pm2-runtime", "npm", "--", "start" ]
I am trying to build the image
docker build -t test .
After building the first time, I try to build again without any change in the context.
I see that it uses the cache only up to a point, and after that it does not use the cache:
# Install dependencies
RUN yarn install
How can I make it use the cache when there are no changes in the code?
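One thing worth knowing here (general Docker behavior, not from the question): Docker decides whether to reuse a COPY layer by checksumming the copied files' contents, not their timestamps. If the cache breaks at RUN yarn install, the contents of package.json or yarn.lock as seen by an earlier COPY must actually have changed. A minimal sketch of that rule (the file path is just an example):

```shell
#!/bin/sh
# Docker's COPY cache key is a content checksum, not an mtime:
# touching a file without changing its bytes does not bust the cache.
printf '{"name":"app"}' > /tmp/package.json
first=$(sha256sum /tmp/package.json | cut -d' ' -f1)
touch /tmp/package.json                 # metadata change only
second=$(sha256sum /tmp/package.json | cut -d' ' -f1)
if [ "$first" = "$second" ]; then
  echo "unchanged content: COPY layer (and yarn install) would be cached"
fi
```

So if the rebuild really happens with an untouched context, it is worth checking whether something (an editor hook, a pre-build script, yarn itself) rewrites package.json or yarn.lock between builds.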

Related

Dockerize NextJS Application with Prisma

I have created a NextJS application, to connect to the database I use Prisma. When I start the application on my computer everything works. Unfortunately, I get error messages when I try to run the application in a Docker container. The container can be created and started. The start page of the application can also be shown (there are no database queries there yet). However, when I click on the first page where there is a database query I get error code 500 - Initial Server Error and the following error message in the console:
PrismaClientInitializationError: Unknown PRISMA_QUERY_ENGINE_LIBRARY undefined. Possible binaryTargets: darwin, darwin-arm64, debian-openssl-1.0.x, debian-openssl-1.1.x, rhel-openssl-1.0.x, rhel-openssl-1.1.x, linux-arm64-openssl-1.1.x, linux-arm64-openssl-1.0.x, linux-arm-openssl-1.1.x, linux-arm-openssl-1.0.x, linux-musl, linux-nixos, windows, freebsd11, freebsd12, openbsd, netbsd, arm, native or a path to the query engine library.
You may have to run prisma generate for your changes to take effect.
at cb (/usr/src/node_modules/@prisma/client/runtime/index.js:38689:17)
at async getServerSideProps (/usr/src/.next/server/pages/admin/admin.js:199:20)
at async Object.renderToHTML (/usr/src/node_modules/next/dist/server/render.js:428:24)
at async doRender (/usr/src/node_modules/next/dist/server/next-server.js:1144:38)
at async /usr/src/node_modules/next/dist/server/next-server.js:1236:28
at async /usr/src/node_modules/next/dist/server/response-cache.js:64:36 {
clientVersion: '3.6.0',
errorCode: undefined
}
My Dockerfile:
# Dockerfile
# base image
FROM node:16-alpine3.12
# create & set working directory
RUN mkdir -p /usr/src
WORKDIR /usr/src
# copy source files
COPY . /usr/src
COPY package*.json ./
COPY prisma ./prisma/
# install dependencies
RUN npm install
COPY . .
# start app
RUN npm run build
EXPOSE 3000
CMD npm run start
My docker-compose.yaml:
version: "3"
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: web
    restart: always
    volumes:
      - ./:/usr/src/app
    ports:
      - "3000:3000"
    env_file:
      - .env
My package.json:
{
  "name": "supermarket",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start",
    "lint": "next lint"
  },
  "prisma": {
    "schema": "prisma/schema.prisma"
  },
  "dependencies": {
    "@prisma/client": "^3.6.0",
    "axios": "^0.22.0",
    "cookie": "^0.4.1",
    "next": "latest",
    "nodemailer": "^6.6.5",
    "react": "17.0.2",
    "react-cookie": "^4.1.1",
    "react-dom": "17.0.2"
  },
  "devDependencies": {
    "eslint": "7.32.0",
    "eslint-config-next": "11.1.2",
    "prisma": "^3.6.0"
  }
}
I've found the error. I think it's a problem with the M1 Chip.
I changed node:16-alpine3.12 to node:lts and added some commands to the Dockerfile which looks like this now:
# base image
FROM node:lts
# create & set working directory
RUN mkdir -p /usr/src
WORKDIR /usr/src
# copy source files
COPY . /usr/src
COPY package*.json ./
COPY prisma ./prisma/
RUN apt-get -qy update && apt-get -qy install openssl
# install dependencies
RUN npm install
RUN npm install @prisma/client
COPY . .
RUN npx prisma generate --schema ./prisma/schema.prisma
# start app
RUN npm run build
EXPOSE 3000
CMD npm run start
I hope this can also help other people 😊
I have been having a similar issue, which I have just solved.
I think what you need to do is change the last block in your Dockerfile to this:
# start app
RUN npm run build
RUN npx prisma generate
EXPOSE 3000
CMD npm run start
I think that will solve your issue.
I've found this solution with some workarounds:
https://gist.github.com/malteneuss/a7fafae22ea81e778654f72c16fe58d3
In short:
# Dockerfile
...
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npx prisma generate # <---important to support Prisma query engine in Alpine Linux in final image
RUN npm run build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
...
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
COPY --chown=nextjs:nodejs prisma ./prisma/ # <---important to support Prisma DB migrations in bootstrap.sh
COPY --chown=nextjs:nodejs bootstrap.sh ./
...
CMD ["./bootstrap.sh"]
This Dockerfile is based on the official Nextjs with Docker example project and adapted to include Prisma. To run migrations on app start we can add a bash script that does so:
# bootstrap.sh
#!/bin/sh
# Run migrations
DATABASE_URL="postgres://postgres:postgres@db:5432/appdb?sslmode=disable" npx prisma migrate deploy
# start app
DATABASE_URL="postgres://postgres:postgres@db:5432/workler?sslmode=disable" node server.js
Unfortunately, we need to explicitly set the DATABASE_URL here, otherwise migrations don't work, because Prisma can't find the environment variable (e.g. from a docker-compose file).
And last but not least, because Alpine Linux base image uses a Musl C-library, the Prisma client has to be compiled in the builder image against that. So, to get the correct version, we need to add this info to Prisma's schema.prisma file:
# schema.prisma
generator client {
  provider      = "prisma-client-js"
  binaryTargets = ["native", "linux-musl"] # <---- important to support Prisma Query engine in Alpine linux, otherwise "PrismaClientInitializationError2 [PrismaClientInitializationError]: Query engine binary for current platform "linux-musl" could not be found."
}
I had luck this way:
FROM node:17-slim as dependencies
# set working directory
WORKDIR /usr/src/app
# Copy package and lockfile
COPY package.json ./
COPY yarn.lock ./
COPY prisma ./prisma/
RUN apt-get -qy update && apt-get -qy install openssl
# install dependencies
RUN yarn --frozen-lockfile
COPY . .
# ---- Build ----
FROM dependencies as build
# build project (dependencies were installed in the previous stage)
RUN yarn build
# ---- Release ----
FROM dependencies as release
# copy build
COPY --from=build /usr/src/app/.next ./.next
COPY --from=build /usr/src/app/public ./public
# dont run as root
USER node
# expose and set port number to 3000
EXPOSE 3000
ENV PORT 3000
# enable run as production
ENV NODE_ENV=production
# start app
CMD ["yarn", "start"]

Flask and React App in single Docker Container

Good day SO,
I know this is bad practice and that I am supposed to have one App per container, but is there a way for me to have two services running concurrently in the same container, and how would I go about writing the Dockerfile for it?
My current Dockerfile for the Flask (Backend) App:
FROM python:3.6.9-slim-buster
WORKDIR /app/flask_backend
ENV PYTHONPATH "${PYTHONPATH}:/app"
COPY ./flask_backend ./
COPY requirements.txt .
RUN pip install -r requirements.txt
CMD python3 app/webapp/app.py
My React (Frontend) Dockerfile:
FROM node:12.18.0-alpine as build
WORKDIR /app/react_frontend
ENV PATH /app/node_modules/.bin:$PATH
ENV NODE_OPTIONS="--max-old-space-size=8192"
COPY ./react_frontend/package.json ./
COPY ./react_frontend/package-lock.json ./
RUN npm ci
RUN npm install react-scripts@3.4.1 -g
RUN npm install serve -g
COPY ./react_frontend ./
CMD ["serve", "-s", "build", "-l", "3000"]
My attempt to launch both apps within the same Docker Container was to merge the two Dockerfiles, but the resulting container does not have the data from the first Dockerfile, and I am unsure how to proceed.
My merged Dockerfile:
FROM python:3.6.9-slim-buster
WORKDIR /app/flask_backend
ENV PYTHONPATH "${PYTHONPATH}:/app"
COPY ./flask_backend ./
COPY requirements.txt .
RUN pip install -r requirements.txt
CMD python3 app/webapp/app.py
FROM node:12.18.0-alpine as build
WORKDIR /app/react_frontend
ENV PATH /app/node_modules/.bin:$PATH
ENV NODE_OPTIONS="--max-old-space-size=8192"
COPY ./react_frontend/package.json ./
COPY ./react_frontend/package-lock.json ./
RUN npm ci
RUN npm install react-scripts@3.4.1 -g
RUN npm install serve -g
COPY ./react_frontend ./
CMD ["serve", "-s", "build", "-l", "3000"]
I am a beginner in using Docker, and hence I foresee that there will be several problems, such as communication between the two apps (the backend uses port 5000), using this method. Any guidance will be greatly appreciated!
A React application doesn't usually have a server per se (development-only Docker setups aside). Instead, you run a tool like Webpack to compile it down to static files, which you can then serve to the browser, which then runs them.
On your host system you'd run something like
yarn build
which produces a dist directory; then you'd copy this into your Flask static directory.
If you do this entirely ahead-of-time, then you can run your application out of a Python virtual environment, which will be a much easier development and test setup, and the Dockerfile you show won't change.
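The ahead-of-time flow above can be sketched in a couple of shell commands. The directory names are assumptions based on the question's layout, and the compiled bundle is simulated with a placeholder file here, since the real first step is `yarn build` inside the frontend directory:

```shell
#!/bin/sh
# Sketch of the ahead-of-time build-and-copy step (paths are assumptions).
# In the real project, `yarn build` in react_frontend/ produces the dist/ files.
mkdir -p react_frontend/dist flask_backend/app/webapp/static
echo '<div id="root"></div>' > react_frontend/dist/index.html   # stand-in for the build output
cp -r react_frontend/dist/. flask_backend/app/webapp/static/    # copy the bundle into Flask's static dir
ls flask_backend/app/webapp/static/index.html
```

After this, Flask serves the frontend from its own static directory, and no Node runtime is needed in production.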
If you want to build this entirely in Docker (for example to take advantage of a more Docker-native automated build system) a multi-stage build matches well here. You can use a first stage to build the front-end application, and then COPY that into the final application in the second stage. That looks roughly like:
FROM node:12.18.0-alpine as build
WORKDIR /app/react_frontend
COPY ./react_frontend/package.json ./
COPY ./react_frontend/package-lock.json ./
RUN npm ci
COPY ./react_frontend ./
RUN npm run build
FROM python:3.6.9-slim-buster
WORKDIR /app/flask_backend
ENV PYTHONPATH "${PYTHONPATH}:/app"
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY ./flask_backend ./
COPY --from=build /app/react_frontend/dist/ ./static/
CMD python3 app/webapp/app.py
This approach is not compatible with setups that overwrite Docker image contents using bind mounts. A non-Docker host Node and Python setup will be a much easier development environment, and for this particular setup isn't likely to be substantially different from the Docker setup.

Docker ExpressJS public folder volume

I want to create a volume for the "public" folder of my Express app in Docker, because when users upload pictures I save them to "public/uploads", and when I make changes to the code and rebuild with docker-compose run --build, I lose all these images.
I tried to find a way to create a volume but I don't know how to link it.
My Dockerfile only consist of these:
FROM node:8.10.0-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
# RUN npm ci --only=production
COPY . .
CMD [ "npm", "start" ]
My goal is to serve uploaded images from "public/uploads", and don't get them removed upon docker-compose run --build.
According to the official documentation, you can use the --mount flag:
//Dockerfile
FROM node:8.10.0-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
# RUN npm ci --only=production
# Note: a BuildKit RUN --mount must be attached to a command; ls here is only a placeholder
RUN --mount=type=bind,source=public/uploads,target=/some_location_in_file_system \
    ls /some_location_in_file_system
COPY . .
CMD [ "npm", "start" ]
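As an aside (an assumption, not part of the answer above): RUN --mount is a build-time BuildKit mount, whereas keeping uploads across docker-compose rebuilds is usually done with a named volume declared in docker-compose.yml. A sketch, with the container path taken from the question's Dockerfile and the service name invented:

```yaml
# docker-compose.yml sketch; the service name "app" is an assumption
services:
  app:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - uploads:/usr/src/app/public/uploads   # survives image rebuilds
volumes:
  uploads:                                    # named volume managed by Docker
```

Because the named volume lives outside the image, `docker-compose up --build` replaces the code but leaves the uploaded files in place.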

Versioning of a nodejs project and dockerizing it with minimum image diff

The npm version is located at package.json.
I have a Dockerfile, simplified as follows:
FROM node:carbon
COPY ./package.json ${DIR}/
RUN npm install
COPY . ${DIR}
RUN npm build
Correct my understanding:
If ./package.json changes, is it true that the image layers from 2 to 5 are rebuilt?
Assuming that I do not have any changes in my npm package dependencies, how can I change the project version without Docker rebuilding the image layer for RUN npm install?
To sum up, the behavior you describe is fairly standard for Docker (as soon as package.json has changed and has a different hash, COPY package.json ./ will be triggered again, as will each subsequent Dockerfile command).
Thus, the docker setup outlined in the official doc of Node.js does not alleviate this, and proposes the following Dockerfile:
FROM node:carbon
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm install --only=production
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "npm", "start" ]
But if you really want to avoid rerunning npm install from scratch most of the time, you could maintain two files, package.json and, say, package-deps.json: copy in (and rename) package-deps.json, run npm install, then bring in the real package.json afterwards.
And if you want to have more checks to be sure that the dependencies of both files are not out-of-sync, it happens that the role of file package-lock.json may be of some help, if you use the new npm ci feature that comes with npm 5.8 (cf. the corresponding changelog) instead of using npm install.
In this case, as the latest version of npm available in Docker Hub is npm 5.6, you'll need to upgrade it beforehand.
All things put together, here is a possible Dockerfile for this use case:
FROM node:carbon
# Create app directory
WORKDIR /usr/src/app
# Upgrade npm to have "npm ci" available
RUN npm install -g npm@5.8.0
# Import conf files for dependencies
COPY package-lock.json package-deps.json ./
# Note that this REQUIRES to run the command "npm install --package-lock-only"
# before running "docker build …" and also REQUIRES a "package-deps.json" file
# that is in sync w.r.t. package.json's dependencies
# Install app dependencies
RUN mv package-deps.json package.json && npm ci
# Note that "npm ci" checks that package.json and package-lock.json are in sync
# COPY package.json ./ # subsumed by the following command
COPY . .
EXPOSE 8080
CMD [ "npm", "start" ]
Disclaimer: I did not try the solution above on a realistic example as I'm not a regular node user, but you may view this as a useful workaround for the dev phase…

Docker multi target build failed on Docker Hub

For an unknown reason, the build fails on Docker Hub when it tries to copy from a just-built target within the same Dockerfile. When I try on my local machine (Fedora 27, Docker CE 17.12), the build succeeds.
Here the failed build log : https://hub.docker.com/r/emmanuelgautier/react-app/builds/bsygsbahuzdxfbsqr5r9er4/
The folder /usr/src/app/build doesn't exist in the second image because, according to the documentation:
CMD does not execute anything at build time, but specifies the
intended command for the image.
RUN should be used instead of CMD where the yarn build command executes.
The correct Dockerfile is:
## Development environment target
FROM node as dev-env
WORKDIR /usr/src/app
COPY [ "package*.json", "yarn.lock", "./" ]
RUN yarn install
COPY . .
EXPOSE 3000
ENTRYPOINT [ "./docker-entrypoint.sh" ]
## Build environment target
FROM node as build-env
WORKDIR /usr/src/app
COPY [ "package*.json", "yarn.lock", "./" ]
RUN yarn install --production
COPY . .
RUN yarn build
## Production environement target
FROM nginx as production-env
LABEL MAINTAINER Emmanuel Gautier <docker@emmanuelgautier.fr>
COPY --from=1 /usr/src/app/build /usr/share/nginx/html
EXPOSE 443 80
