Dockerfile: copy files on host before running in container - docker

I am using semantic-ui, which requires semantic.json to be located in the root folder with the setting autoInstall: true when using a Dockerfile.
In case I want to use a custom theme, I need to rebuild semantic-ui, and gulp requires semantic.json to be located in root-folder/semantic/. That means that before building the image, semantic.json should be located in the semantic folder and copied to the root folder; then, after npm install and ng serve, it should be removed from the root folder so that gulp can run.
FROM node:8.11.3
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /app
COPY package-lock.json /app
COPY semantic.json /app
RUN npm install -g npm@latest \
&& npm install -g n \
&& npm install -g @angular/cli \
&& npm install -g gulp \
&& npm install gulp \
&& npm install
# add app
COPY . /app
EXPOSE 4200
# start app
CMD ng serve --port 4200 --host 0.0.0.0
My question is: how can I use the Dockerfile to copy the semantic.json file from the semantic folder to the root folder, build, and then remove it from the root folder again?
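One possible sketch (untested, and assuming semantic.json lives in ./semantic in the build context): do the copy and the cleanup inside the image rather than on the host, since a Dockerfile cannot modify host files:

```dockerfile
# Sketch only: copy semantic.json to the image root for npm install,
# then remove it so gulp later resolves the copy in /app/semantic.
COPY semantic/semantic.json /app/semantic.json
RUN npm install && rm /app/semantic.json
```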

Instead of finding a work-around, I updated semantic.json to point the base path at the root folder, prefixing the remaining paths with semantic, as follows:
"base": "",
"paths": {
  "source": {
    "config": "semantic/src/theme.config",
    "definitions": "semantic/src/definitions/",
    "site": "semantic/src/site/",
    "themes": "semantic/src/themes/"
  },
  "output": {
    "packaged": "semantic/dist/",
    "uncompressed": "semantic/dist/components/",
    "compressed": "semantic/dist/components/",
    "themes": "semantic/dist/themes/"
  },
  "clean": "semantic/dist/"
},
"permission": false,
"autoInstall": true,
"rtl": false,
"version": "2.3.3"

Related

NextJS revalidate doesn't work with standalone server

It looks like the revalidation option doesn't work when using the standalone server of NextJS.
I got this:
return {
  props: {
    page,
  },
  revalidate: 60,
};
I have the following NextJS config:
{
  output: "standalone",
  reactStrictMode: true,
  swcMinify: true,
  i18n: {
    locales: ["default", "en", "nl", "fr"],
    defaultLocale: "default",
    localeDetection: false,
  },
  trailingSlash: true,
}
And I use the following docker file to create a container:
# Install dependencies only when needed
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
# Install dependencies based on the preferred package manager
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
elif [ -f package-lock.json ]; then npm ci; \
elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
else echo "Lockfile not found." && exit 1; \
fi
# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN yarn build
# If using npm comment out above and use below instead
# RUN npm run build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]
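As an aside, the lockfile-detection step in the deps stage can be exercised outside Docker with a small shell helper (the function name detect_install is hypothetical); it prints the install command the Dockerfile's RUN would choose:

```shell
#!/bin/sh
# Hypothetical helper mirroring the deps stage's lockfile detection;
# it prints the chosen install command instead of running it.
detect_install() {
  dir="$1"
  if [ -f "$dir/yarn.lock" ]; then echo "yarn --frozen-lockfile"
  elif [ -f "$dir/package-lock.json" ]; then echo "npm ci"
  elif [ -f "$dir/pnpm-lock.yaml" ]; then echo "pnpm i --frozen-lockfile"
  else echo "no lockfile"
  fi
}

tmp=$(mktemp -d)
touch "$tmp/package-lock.json"
detect_install "$tmp"    # prints: npm ci
```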
I tried many things myself but nothing worked, and on the internet I keep coming across the advice that you have to run on a NextJS server; but if I'm correct, that is exactly what you do when running it standalone.
I got into the issue more and found some more errors. It turned out that the next/image component's default loader uses squoosh because it is quick to install and suitable for a development environment, but for a production environment using output: "standalone" you must install sharp.
So I ran
yarn add sharp
And that error no longer occurred. But I still didn't get an update on my content. Because I could not get this working and I needed to continue with the project, I checked out On-Demand Revalidation. Read more about it here:
https://nextjs.org/docs/basic-features/data-fetching/incremental-static-regeneration
So what I now did was:
create an API end-point in my NextJS solution to revalidate.
use the webhook from my CMS to call that api end-point
That did the trick for me. I could now just update something in my CMS, the webhook was triggered, and I got a fresh, newly built page with the changes!
In my eyes this is a better solution because:
It only revalidates the page when it is really needed
After the change/revalidation the next visitor directly sees the new updated version
So I'm happy that it didn't work, and with a bit of extra effort I ended up with a way better solution!
Hope this helps somebody.
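The two steps above can be sketched as the request a CMS webhook might make; the endpoint path, secret, and page path are all hypothetical (they depend on how the API route is written):

```shell
#!/bin/sh
# Hypothetical on-demand revalidation trigger. The route /api/revalidate,
# the secret query parameter, and the page path are assumptions, not
# part of the original post.
REVALIDATE_URL="https://example.com/api/revalidate"
SECRET="my-revalidate-secret"
PAGE="/blog/post-1"

request="${REVALIDATE_URL}?secret=${SECRET}&path=${PAGE}"
echo "$request"
# A CMS webhook would then issue something like:
# curl -X POST "$request"
```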

Docker COPY command not copying a specific file

I have a folder dist containing config.yaml, configuration.d.ts, configuration.js and configuration.map files. The issue is that all the files are copied to the container except config.yaml.
On debugging I found that if I write COPY dist dist before the line FROM abcd.com/baseos/node:buster-14.15.4-1, config.yaml is copied; but if I write COPY dist dist after that FROM line, it doesn't copy config.yaml, although it copies all the other files (configuration.d.ts, configuration.js and configuration.map).
The command I use to build my Dockerfile is below:
docker build -t drs:1.0.0 -f . /srs/sync-data
Below is my Dockerfile
FROM abcd.com/baseos/node:buster-14.15.4-1 AS buildcontainer
COPY src src
COPY config config
COPY package*.json ./
COPY tsconfig.json .
COPY tsconfig.build.json .
COPY .eslintrc.js .
COPY .prettierrc .
RUN npm ci && \
npm run build && \
rm -rf node_modules && \
npm ci --production
FROM abcd.com/baseos/node:buster-14.15.4-1
ARG SERVICEVERSION=0.0.0-snapshot
ENV SERVICEVERSION=$SERVICEVERSION
COPY --from=buildcontainer dist dist
COPY --from=buildcontainer node_modules node_modules
COPY package.json .
CMD npm run start:prod
Since I'm using nestjs, I didn't copy over nest-cli.json in the Dockerfile. My nest-cli.json had the below config:
"compilerOptions": {
  "assets": [{ "include": "../config/*.yaml", "outDir": "./dist/config" }]
}
which tells the nest compiler to put it into dist folder.
Once I added the below line to the Dockerfile, it worked:
COPY nest-cli.json .
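A hedged sketch of where that line would sit in the build stage (assuming the file is named nest-cli.json, as described above):

```dockerfile
# Sketch: include nest-cli.json in the build container so the nest
# compiler's assets rule emits config.yaml into dist/config.
COPY nest-cli.json .
RUN npm ci && npm run build
```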

Dockerize NextJS Application with Prisma

I have created a NextJS application; to connect to the database I use Prisma. When I start the application on my computer everything works. Unfortunately, I get error messages when I try to run the application in a Docker container. The container can be created and started, and the start page of the application is shown (there are no database queries there yet). However, when I click on the first page with a database query I get error code 500 - Internal Server Error and the following error message in the console:
PrismaClientInitializationError: Unknown PRISMA_QUERY_ENGINE_LIBRARY undefined. Possible binaryTargets: darwin, darwin-arm64, debian-openssl-1.0.x, debian-openssl-1.1.x, rhel-openssl-1.0.x, rhel-openssl-1.1.x, linux-arm64-openssl-1.1.x, linux-arm64-openssl-1.0.x, linux-arm-openssl-1.1.x, linux-arm-openssl-1.0.x, linux-musl, linux-nixos, windows, freebsd11, freebsd12, openbsd, netbsd, arm, native or a path to the query engine library.
You may have to run prisma generate for your changes to take effect.
at cb (/usr/src/node_modules/@prisma/client/runtime/index.js:38689:17)
at async getServerSideProps (/usr/src/.next/server/pages/admin/admin.js:199:20)
at async Object.renderToHTML (/usr/src/node_modules/next/dist/server/render.js:428:24)
at async doRender (/usr/src/node_modules/next/dist/server/next-server.js:1144:38)
at async /usr/src/node_modules/next/dist/server/next-server.js:1236:28
at async /usr/src/node_modules/next/dist/server/response-cache.js:64:36 {
clientVersion: '3.6.0',
errorCode: undefined
}
My Dockerfile:
# Dockerfile
# base image
FROM node:16-alpine3.12
# create & set working directory
RUN mkdir -p /usr/src
WORKDIR /usr/src
# copy source files
COPY . /usr/src
COPY package*.json ./
COPY prisma ./prisma/
# install dependencies
RUN npm install
COPY . .
# start app
RUN npm run build
EXPOSE 3000
CMD npm run start
My docker-compose.yaml:
version: "3"
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: web
    restart: always
    volumes:
      - ./:/usr/src/app
    ports:
      - "3000:3000"
    env_file:
      - .env
My package.json:
{
  "name": "supermarket",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start",
    "lint": "next lint"
  },
  "prisma": {
    "schema": "prisma/schema.prisma"
  },
  "dependencies": {
    "@prisma/client": "^3.6.0",
    "axios": "^0.22.0",
    "cookie": "^0.4.1",
    "next": "latest",
    "nodemailer": "^6.6.5",
    "react": "17.0.2",
    "react-cookie": "^4.1.1",
    "react-dom": "17.0.2"
  },
  "devDependencies": {
    "eslint": "7.32.0",
    "eslint-config-next": "11.1.2",
    "prisma": "^3.6.0"
  }
}
I've found the error. I think it's a problem with the M1 Chip.
I changed node:16-alpine3.12 to node:lts and added some commands to the Dockerfile which looks like this now:
# base image
FROM node:lts
# create & set working directory
RUN mkdir -p /usr/src
WORKDIR /usr/src
# copy source files
COPY . /usr/src
COPY package*.json ./
COPY prisma ./prisma/
RUN apt-get -qy update && apt-get -qy install openssl
# install dependencies
RUN npm install
RUN npm install @prisma/client
COPY . .
RUN npx prisma generate --schema ./prisma/schema.prisma
# start app
RUN npm run build
EXPOSE 3000
CMD npm run start
I hope this can also help other people 😊
I have been having a similar issue, which I have just solved.
I think what you need to do is change the last block in your docker file to this
# start app
RUN npm run build
RUN npx prisma generate
EXPOSE 3000
CMD npm run start
I think that will solve your issue.
I've found this solution with some workarounds:
https://gist.github.com/malteneuss/a7fafae22ea81e778654f72c16fe58d3
In short:
# Dockerfile
...
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npx prisma generate # <---important to support Prisma query engine in Alpine Linux in final image
RUN npm run build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
...
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
COPY --chown=nextjs:nodejs prisma ./prisma/ # <---important to support Prisma DB migrations in bootstrap.sh
COPY --chown=nextjs:nodejs bootstrap.sh ./
...
CMD ["./bootstrap.sh"]
This Dockerfile is based on the official Nextjs with Docker example project and adapted to include Prisma. To run migrations on app start we can add a bash script that does so:
# bootstrap.sh
#!/bin/sh
# Run migrations
DATABASE_URL="postgres://postgres:postgres@db:5432/appdb?sslmode=disable" npx prisma migrate deploy
# start app
DATABASE_URL="postgres://postgres:postgres@db:5432/workler?sslmode=disable" node server.js
Unfortunately, we need to explicitly set the DATABASE_URL here, otherwise migrations don't work, because Prisma can't find the environment variable (e.g. from a docker-compose file).
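A hypothetical variant of that bootstrap step: fall back to the baked-in connection string only when DATABASE_URL was not injected (e.g. via a docker-compose env_file). The credentials are the placeholder values from the answer above, not real ones:

```shell
#!/bin/sh
# Hypothetical tweak to bootstrap.sh: prefer an injected DATABASE_URL,
# fall back to a default only if it is unset or empty.
DATABASE_URL="${DATABASE_URL:-postgres://postgres:postgres@db:5432/appdb?sslmode=disable}"
echo "$DATABASE_URL"
# npx prisma migrate deploy   # would see DATABASE_URL either way
```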
And last but not least, because Alpine Linux base image uses a Musl C-library, the Prisma client has to be compiled in the builder image against that. So, to get the correct version, we need to add this info to Prisma's schema.prisma file:
# schema.prisma
generator client {
  provider      = "prisma-client-js"
  binaryTargets = ["native", "linux-musl"] # <---- important to support Prisma Query engine in Alpine linux, otherwise "PrismaClientInitializationError2 [PrismaClientInitializationError]: Query engine binary for current platform "linux-musl" could not be found."
}
I had luck this way:
FROM node:17-slim as dependencies
# set working directory
WORKDIR /usr/src/app
# Copy package and lockfile
COPY package.json ./
COPY yarn.lock ./
COPY prisma ./prisma/
RUN apt-get -qy update && apt-get -qy install openssl
# install dependencies
RUN yarn --frozen-lockfile
COPY . .
# ---- Build ----
FROM dependencies as build
# install all dependencies
# build project
RUN yarn build
# ---- Release ----
FROM dependencies as release
# copy build
COPY --from=build /usr/src/app/.next ./.next
COPY --from=build /usr/src/app/public ./public
# dont run as root
USER node
# expose and set port number to 3000
EXPOSE 3000
ENV PORT 3000
# enable run as production
ENV NODE_ENV=production
# start app
CMD ["yarn", "start"]

Debugging docker image in VS code : cs file cannot be found:

I am trying to debug a docker image using the given description in this article.
I have created a Dockerfile like this :
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
RUN apt update && \
apt install procps -y && \
apt install unzip && \
curl -sSL https://aka.ms/getvsdbgsh | /bin/sh /dev/stdin -v latest -l /vsdbg
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY *.sln ./
COPY server/Business/NotificationModule.Business.csproj server/Business/
COPY server/Common/NotificationModule.Common.csproj server/Common/
COPY server/Data/NotificationModule.Data.csproj server/Data/
COPY server/DomainModel/NotificationModule.DomainModel.csproj server/DomainModel/
COPY server/Host/NotificationModule.Host.csproj server/Host/
RUN dotnet restore
COPY . .
WORKDIR "/src/server/Host/"
RUN dotnet build "NotificationModule.Host.csproj" -c Debug -o /app/build
FROM build AS publish
RUN dotnet publish "NotificationModule.Host.csproj" -c Debug -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "NotificationModule.dll"]
and I have added the entry for debugging docker images in launch.json like this :
{
  "name": ".NET Core Docker Attach",
  "type": "coreclr",
  "request": "attach",
  "processId": "${command:pickRemoteProcess}",
  "pipeTransport": {
    "pipeProgram": "docker",
    "pipeArgs": [ "exec", "-i", "objective_torvalds" ],
    "debuggerPath": "/vsdbg/vsdbg",
    "pipeCwd": "${workspaceRoot}",
    "quoteArgs": false
  }
},
It seems to be working and I can attach the debugger to my process, but the problem is that when the debugger hits a breakpoint it cannot find any .cs files and shows an empty .cs file instead.
I would like to ask what I have done wrong.
UPDATE:
I have noticed that the debugger is looking for my .cs files under a src folder, which apparently exists neither in my working directory nor in the image itself. So the question is why it is looking there.
OK, I've got it. That was my mistake: I was using the same Dockerfile that we have for production to copy pdb files into the docker image, and those pdb files had been built in a docker container in a src directory, so the debugger was looking there.
I just copied the files from my bin/debug into the docker image and now it is working perfectly. (Later I noticed that this was also mentioned in the article.)
So here is the new Dockerfile:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
RUN apt update && \
apt install procps -y && \
apt install unzip && \
curl -sSL https://aka.ms/getvsdbgsh | /bin/sh /dev/stdin -v latest -l /vsdbg
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY bin/debug .
ENTRYPOINT ["dotnet", "NotificationModule.dll"]
And BTW, you shouldn't add bin to your .dockerignore file.
I had the same problem - the debugger worked (I could see variable values), but it couldn't find my .cs files.
It turned out that the default value of sourceFileMap in launch.json was incorrect; I found information on how to use it properly at the OmniSharp GitHub.
Here's how to fix the issue:
After I clicked the Create File button in the popup with the error, I was able to check where the debugger was looking for the source file. In my case the error message was:
Unable to write file '/out/src/MyApp/WebDriverFactory/WebDriverFactory.cs' (Unknown (FileSystemError): Error: EROFS: read-only file system, mkdir '/out')
I could use the message above to fix my paths in sourceFileMap:
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Docker .NET Core Attach (Preview)",
      "type": "docker",
      "request": "attach",
      "platform": "netCore",
      "sourceFileMap": {
        "/out/src/": "${workspaceFolder}/src"
      }
    }
  ]
}
I used .NET 6 with Alpine images:
mcr.microsoft.com/dotnet/runtime:6.0-alpine3.14-amd64

yarn install inside docker image with yarn workspaces

I am using yarn workspaces and I have this packages in my package.json:
"workspaces": ["packages/*"]
I am trying to create a docker image to deploy and I have the following Dockerfile:
# production dockerfile
FROM node:9.2
# add code
COPY ./packages/website/dist /cutting
WORKDIR /cutting
COPY package.json /cutting/
RUN yarn install --pure-lockfile && yarn cache clean --production
CMD npm run serve
But I get the following error:
error An unexpected error occurred:
"https://registry.yarnpkg.com/@cutting%2futil: Not found"
@cutting/util is the name of one of my workspace packages.
So the problem is that there is no source code in the docker image, so yarn is trying to install the package from yarnpkg.
What is the best way to handle workspaces when deploying to a docker image?
This code won't work outside of the docker vm, so it will fail inside docker too.
The problem is that you have built the code and copied only the bundled output. Yarn workspaces looks for a package.json that you don't have in the dist folder. Workspaces just creates a link in a common node_modules folder to the other workspace you are using, so the source code is needed there. (BTW, why don't you build the code inside the docker vm? That way both the source code and dist would be available.)
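For illustration, a hedged sketch of a Dockerfile that gives yarn the workspace manifests so @cutting/util resolves locally instead of from the registry; the packages/util path is an assumption:

```dockerfile
# Sketch: copy the root manifest plus each workspace's package.json
# before yarn install, so workspaces can link @cutting/util locally.
FROM node:9.2
WORKDIR /cutting
COPY package.json yarn.lock ./
COPY packages/website/package.json packages/website/
COPY packages/util/package.json packages/util/
RUN yarn install --pure-lockfile
COPY . .
```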
Here is my dockerfile. I use yarn workspaces and lerna, but without lerna it should be similar. You want to build your shared libraries and then verify the build works locally by running your code from the dist folder.
###############################################################################
# Step 1 : Builder image
FROM node:11 AS builder
WORKDIR /usr/src/app
ENV NODE_ENV production
RUN npm i -g yarn
RUN npm i -g lerna
COPY ./lerna.json .
COPY ./package* ./
COPY ./yarn* ./
COPY ./.env .
COPY ./packages/shared/ ./packages/shared
COPY ./packages/api/ ./packages/api
# Install dependencies and build whatever you have to build
RUN yarn install --production
RUN lerna bootstrap
RUN cd /usr/src/app/packages/shared && yarn build
RUN cd /usr/src/app/packages/api && yarn build
###############################################################################
# Step 2 : Run image
FROM node:11
LABEL maintainer="Richard T"
LABEL version="1.0"
LABEL description="This is our dist docker image"
RUN npm i -g yarn
RUN npm i -g lerna
ENV NODE_ENV production
ENV NPM_CONFIG_LOGLEVEL error
ARG PORT=3001
ENV PORT $PORT
WORKDIR /usr/src/app
COPY ./package* ./
COPY ./lerna.json ./
COPY ./.env ./
COPY ./yarn* ./
COPY --from=builder /usr/src/app/packages/shared ./packages/shared
COPY ./packages/api/package* ./packages/api/
COPY ./packages/api/.env* ./packages/api/
COPY --from=builder /usr/src/app/packages/api ./packages/api
RUN yarn install
CMD cd ./packages/api && yarn start-production
EXPOSE $PORT
###############################################################################
