How to run yarn build in a Dockerfile

I am trying to build a Docker image for a Nuxt.js project by running:
docker build . -t <YOUR_DOCKER_HUB_USERNAME>/my-nuxt-project
When I run the command I receive the following error:
Step 5/13 : RUN yarn build
---> Running in 4dd3684952ba
yarn run v1.22.19
$ nuxt build
ℹ Production build
ℹ Bundling for server and client side
ℹ Target: server
ℹ Using components loader to optimize imports
ℹ Discovered Components: .nuxt/components/readme.md
✔ Builder initialized
✔ Nuxt files generated
ℹ Compiling Client
node:internal/crypto/hash:71
this[kHandle] = new _Hash(algorithm, xofLen);
^
Error: error:0308010C:digital envelope routines::unsupported
at new Hash (node:internal/crypto/hash:71:19)
at Object.createHash (node:crypto:133:10)
at module.exports (/app/node_modules/webpack/lib/util/createHash.js:135:53)
at NormalModule._initBuildHash (/app/node_modules/webpack/lib/NormalModule.js:417:16)
at handleParseError (/app/node_modules/webpack/lib/NormalModule.js:471:10)
at /app/node_modules/webpack/lib/NormalModule.js:503:5
at /app/node_modules/webpack/lib/NormalModule.js:358:12
at /app/node_modules/loader-runner/lib/LoaderRunner.js:373:3
at iterateNormalLoaders (/app/node_modules/loader-runner/lib/LoaderRunner.js:214:10)
at Array.<anonymous> (/app/node_modules/loader-runner/lib/LoaderRunner.js:205:4)
at Storage.finished (/app/node_modules/enhanced-resolve/lib/CachedInputFileSystem.js:55:16)
at /app/node_modules/enhanced-resolve/lib/CachedInputFileSystem.js:91:9
at /app/node_modules/graceful-fs/graceful-fs.js:123:16
at FSReqCallback.readFileAfterClose [as oncomplete] (node:internal/fs/read_file_context:68:3) {
opensslErrorStack: [ 'error:03000086:digital envelope routines::initialization error' ],
library: 'digital envelope routines',
reason: 'unsupported',
code: 'ERR_OSSL_EVP_UNSUPPORTED'
}
Node.js v18.12.0
error Command failed with exit code 1.
This is the Dockerfile I am using; I am following the Nuxt documentation to build the image.
FROM node:lts as builder
WORKDIR /app
COPY . .
RUN yarn install \
--prefer-offline \
--frozen-lockfile \
--non-interactive \
--production=false
RUN yarn build
RUN rm -rf node_modules && \
NODE_ENV=production yarn install \
--prefer-offline \
--pure-lockfile \
--non-interactive \
--production=true
FROM node:lts
WORKDIR /app
COPY --from=builder /app .
ENV HOST 0.0.0.0
EXPOSE 3000
CMD [ "yarn", "start" ]
Does anyone know how to debug this?

Try adding NODE_OPTIONS=--openssl-legacy-provider as a Docker environment variable, so your Dockerfile looks like the one below, then rebuild the image.
FROM node:lts as builder
WORKDIR /app
ENV NODE_OPTIONS=--openssl-legacy-provider
COPY . .
RUN yarn install \
--prefer-offline \
--frozen-lockfile \
--non-interactive \
--production=false
RUN yarn build
RUN rm -rf node_modules && \
NODE_ENV=production yarn install \
--prefer-offline \
--pure-lockfile \
--non-interactive \
--production=true
FROM node:lts
WORKDIR /app
COPY --from=builder /app .
ENV HOST 0.0.0.0
EXPOSE 3000
CMD [ "yarn", "start" ]

Related

Next.js build in Dockerfile failing because of missing .next/cache folder

I'm experiencing a strange situation where the same Dockerfile and CodeBuild project config is failing in prod but not in dev.
FROM public.ecr.aws/docker/library/node:14-alpine AS deps
WORKDIR /app
COPY package*.json yarn.lock ./
RUN yarn install --production=false --pure-lockfile --ignore-engines
FROM public.ecr.aws/docker/library/node:14-alpine AS builder
WORKDIR /app
RUN apk add --no-cache libc6-compat curl
COPY . .
COPY --from=deps /app/package.json .
COPY --from=deps /app/node_modules ./node_modules
RUN DEBUG=graphql:errors,graphql:queries,cache:redis DEBUG_COLORS=true npm run build --debug
RUN addgroup --system --gid 1001 app_user \
&& adduser --system --uid 1001 --shell /bin/bash -D app_user \
&& chown -R app_user:app_user /app \
&& echo "export PATH=$PATH:/usr/bin/curl" >> /etc/profile
USER app_user
HEALTHCHECK --interval=30s --timeout=5s --retries=3 CMD curl -f http://localhost:8080/blog || exit 1
EXPOSE 8080 6379
ENTRYPOINT [ "npm", "start"]
I'm getting the error: [Error: ENOENT: no such file or directory, stat '/app/.next/cache']
In detail this is the error output:
Build error occurred
[Error: ENOENT: no such file or directory, stat '/app/.next/cache'] {
errno: -2,
code: 'ENOENT',
syscall: 'stat',
path: '/app/.next/cache'
}
npm ERR! code ELIFECYCLE
I've thought about a permissions issue, but by the time the build command runs, the root user is still in effect.
By default, if a directory isn't present, Docker doesn't create it automatically. So you need to create it and give app_user write permission to it.
# ...COPY commands...
# Directories need the execute bit to be traversable, so use 755 rather than 644
RUN mkdir -p .next/cache && chown -R app_user .next/cache && chmod -R 755 .next/cache
# ...debug/build command...
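For orientation, here is roughly where that would slot into the Dockerfile from the question (a sketch; the only assumption is that the directory merely needs to exist before next build stats it):
COPY --from=deps /app/node_modules ./node_modules
# Pre-create the Next.js build cache so the stat call during `next build` succeeds
RUN mkdir -p .next/cache
RUN DEBUG=graphql:errors,graphql:queries,cache:redis DEBUG_COLORS=true npm run build --debug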

Docker build is using cache for COPY command even if my files have changed

I have a Dockerfile that is as follow:
FROM node:14-alpine as frontend-builder
WORKDIR /app/frontend
COPY ./frontend .
ENV PATH ./node_modules/.bin/:$PATH
RUN set -ex; \
yarn install --frozen-lockfile --production; \
yarn cache clean; \
yarn run build
CMD ["tail", "-f", "/dev/null"]
I have changed one file in the frontend folder and re-run the build, and Docker is still using the cache. I know I can force a rebuild with --no-cache, but how can I get Docker to detect changes in my files instead of relying on --no-cache?
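(Not from the original thread, but a common sanity check: the cache key for a COPY step is a checksum of the files in the build context, so a genuinely changed file should invalidate it on its own; when it doesn't, the file is often being excluded from the context.)
# Check whether the changed file is filtered out of the build context
cat .dockerignore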

How to extract coverage report in multistage build?

I want to extract the coverage report while building a Docker image in a multi-stage build. Previously I executed the tests via image.inside using the Jenkins Docker plugin, but now I run the tests with the following command and cannot extract the coverage report.
docker build -t myapp:test --cache-from registry/myapp:test --target test --build-arg BUILDKIT_INLINE_CACHE=1 .
Is there any way to mount the Jenkins workspace, as the function below does, without running the Docker image? There is a --output flag, but I could not figure out how to use it, if it even works. Or could it be done via RUN --mount=type ...?
image.inside('-u root -v $WORKSPACE/coverage:/var/app/coverage') {
    stage("Running Tests") {
        timeout(10) {
            withEnv(["NODE_ENV=production"]) {
                sh(script: "cd /var/app; yarn run test:ci")
            }
        }
    }
}
Dockerfile
FROM node:16.15.0-alpine3.15 as base
WORKDIR /var/app
RUN --mount=type=cache,target=/var/cache/apk \
apk add --update --virtual build-dependencies build-base \
curl \
python3 \
make \
g++ \
bash
COPY package*.json ./
COPY yarn.lock ./
COPY .solidarity ./
RUN --mount=type=cache,target=/root/.yarn YARN_CACHE_FOLDER=/root/.yarn && \
yarn install --no-progress --frozen-lockfile --check-files && \
yarn cache clean
COPY . .
FROM base as test
ENV NODE_ENV=production
RUN ["yarn", "run", "format:ci"]
RUN ["yarn", "run", "lint:ci"]
RUN ["yarn", "run", "test:ci"]
FROM base as builder
RUN yarn build
FROM node:16.15.0-alpine3.15 as production
WORKDIR /var/app
COPY --from=builder /var/app /var/app
CMD ["yarn", "start:envconsul"]
You can make a stage with the output you want to extract:
FROM node:16.15.0-alpine3.15 as base
WORKDIR /var/app
RUN --mount=type=cache,target=/var/cache/apk \
apk add --update --virtual build-dependencies build-base \
curl \
python3 \
make \
g++ \
bash
COPY package*.json ./
COPY yarn.lock ./
COPY .solidarity ./
RUN --mount=type=cache,target=/root/.yarn YARN_CACHE_FOLDER=/root/.yarn && \
yarn install --no-progress --frozen-lockfile --check-files && \
yarn cache clean
COPY . .
FROM base as test
ENV NODE_ENV=production
RUN ["yarn", "run", "format:ci"]
RUN ["yarn", "run", "lint:ci"]
RUN ["yarn", "run", "test:ci"]
FROM scratch as test-out
COPY --from=test /var/app/coverage/ /
FROM base as builder
RUN yarn build
FROM node:16.15.0-alpine3.15 as production
WORKDIR /var/app
COPY --from=builder /var/app /var/app
CMD ["yarn", "start:envconsul"]
Then you can build with:
docker build \
--output "type=local,dest=${WORKSPACE}/coverage" \
--target test-out .
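Note that --output (like the RUN --mount lines already in the Dockerfile) requires BuildKit, so depending on the Docker version on the Jenkins agent it may need to be enabled explicitly, for example:
DOCKER_BUILDKIT=1 docker build \
--output "type=local,dest=${WORKSPACE}/coverage" \
--target test-out .
With --target test-out, only the files copied into that scratch stage are exported, so the coverage report lands directly under ${WORKSPACE}/coverage without ever running the image.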

Build NextJS Docker image with nginx server

I am new to Docker and trying to learn it from its documentation. As I need to create a Next.js build served by nginx using a Docker image, I have followed the process below:
Install nginx.
Change the port from 80 to 3000 in the default config.
Symlink the out directory to the base nginx directory.
Use CMD to take care of the production build and the symlinking of the out directory.
FROM node:alpine AS deps
RUN apk add --no-cache libc6-compat git
RUN apt-get install nginx -y
WORKDIR /sample-app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
FROM node:alpine AS builder
WORKDIR /sample-app
COPY . .
COPY --from=deps /sample-app/node_modules ./node_modules
RUN yarn build
FROM node:alpine AS runner
WORKDIR /sample-app
ENV NODE_ENV production
RUN ls -SF /sample-app/out /usr/share/nginx/html
RUN -p 3000:80 -v /sample-app/out:/usr/share/nginx/html:ro -d nginx
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
RUN chown -R nextjs:nodejs /sample-app/out
USER nextjs
CMD ["nginx -g daemon=off"]
When running the build with sudo docker build . -t sample-app, it throws the error: The command '/bin/sh -c apt-get install nginx -y' returned a non-zero code: 127
I do not have much experience with Alpine images, but I think you have to use apk (Alpine Package Keeper) to install packages.
Try apk add nginx instead of apt-get install nginx -y.
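So the deps stage would become something like this (a sketch that only swaps the package manager; the rest of the Dockerfile is unchanged):
FROM node:alpine AS deps
# Alpine images ship apk, not apt-get
RUN apk add --no-cache libc6-compat git nginx
WORKDIR /sample-app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile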

Convert from multi stage build to single

As I'm limited to Docker 1.x instead of 17.x on my cluster, I need some help converting this multi-stage build into a valid build for the older Docker version.
Could someone help me?
FROM node:9-alpine as deps
ENV NODE_ENV=development
RUN apk update && apk upgrade && \
apk add --no-cache bash
WORKDIR /app
COPY . .
RUN npm set progress=false \
&& npm config set depth 0 \
&& npm install --only=production \
&& cp -R node_modules/ ./prod_node_modules \
&& npm install
FROM deps as test
RUN rm -r ./prod_node_modules \
&& npm run lint
FROM node:9-alpine
RUN apk add --update tzdata
ENV PORT=3000
ENV NODE_ENV=production
WORKDIR /root/
COPY --from=deps /app .
COPY --from=deps /app/prod_node_modules ./node_modules
EXPOSE 3000
CMD ["node", "index.js"]
Currently it gives me an error on "FROM node:9-alpine as deps".
"FROM node:9-alpine as deps" means you are defining an intermediate image that you will be able to COPY from COPY --from=deps.
Having a single image means you don't need to COPY --from anymore, and you don't need "as deps" since everything happens in the same image (which will be bigger as a result)
So:
FROM node:9-alpine
ENV NODE_ENV=development
RUN apk update && apk upgrade && \
apk add --no-cache bash
WORKDIR /app
COPY . .
RUN npm set progress=false \
&& npm config set depth 0 \
&& npm install --only=production \
&& cp -R node_modules/ ./prod_node_modules \
&& npm install
# Keep prod_node_modules: unlike the multi-stage version, it is still needed further down in the same image
RUN npm run lint
RUN apk add --update tzdata
ENV PORT=3000
ENV NODE_ENV=production
WORKDIR /root/
# Copy the contents of /app (not the directory itself) into the new WORKDIR, then swap in the production node_modules
RUN cp -r /app/. . && rm -rf ./node_modules && cp -r ./prod_node_modules ./node_modules
EXPOSE 3000
CMD ["node", "index.js"]
Only one FROM here.
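Building and running it works the same way as before on the older Docker version (the image name here is just an example):
docker build -t my-app .
docker run -p 3000:3000 my-app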
