I'm using pnpm in a Dockerfile, and I have one dependency that is installed from GitHub.
pnpm by default uses yarn to install dependencies from Git.
The problem is that pnpm is not able to access yarn; I think it is some kind of permission problem.
ERROR:
ERR_PNPM_PREPARE_PKG_FAILURE Command failed with exit code 1: /usr/local/bin/yarn install
The command '/bin/sh -c pnpm install' returned a non-zero code: 1
Here is my Dockerfile:
FROM node:alpine
RUN npm install -g pnpm
WORKDIR /app
COPY ["package.json", "pnpm-lock.yaml", "./"]
RUN pnpm install
COPY . .
RUN pnpm build
ENV PORT=8080
EXPOSE 80
CMD [ "node", "./build/index.js" ]
Update
This is the repo that is installed from GitHub: Baileys.
Everything works perfectly when I install the packages without the Dockerfile: if I run pnpm install directly, everything just works. But when I run the build command for the Dockerfile, it fails:
docker build -t name .
As you stated, pnpm uses yarn to install dependencies from Git. From your output, you can see that yarn failed. If you run yarn add https://github.com/adiwajshing/Baileys.git inside the Docker container, it outputs:
info No lockfile found.
[1/4] Resolving packages...
error Couldn't find the binary git
The node:alpine image does not include git.
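You can confirm this with a quick one-off check outside the build (a sketch; any node:alpine container will do):
docker run --rm node:alpine sh -c 'command -v git || echo "git is not installed"'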
To resolve the problem, install git before running pnpm install in the Dockerfile:
FROM node:alpine
RUN apk add --no-cache git
RUN npm install -g pnpm
...
docker can't find file develop.sh even though it's in the root directory
My Dockerfile:
FROM node:16.13.0
WORKDIR /app/medusa
COPY package.json .
COPY develop.sh .
COPY yarn.* .
RUN apt-get update
RUN apt-get install -y python
RUN npm install -g npm@latest
RUN npm install -g @medusajs/medusa-cli@latest
RUN npm install
COPY . .
ENTRYPOINT ["./develop.sh"]
Edit: I am trying to run an open source project called Medusa (you can find the code here); I haven't changed anything except the node version in the Dockerfile.
As per @Charles Duffy's suggestion, changing the entrypoint to ENTRYPOINT ["/bin/sh", "./develop.sh"] solved the issue.
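The exec-form entrypoint runs the script file directly, so it fails if the file lacks the executable bit or has a broken shebang line (e.g. from Windows CRLF line endings); prefixing /bin/sh sidesteps both. If you would rather keep the original exec form, a common alternative is to normalize the script during the build (a sketch, assuming one of those two root causes):
RUN sed -i 's/\r$//' develop.sh && chmod +x develop.sh
ENTRYPOINT ["./develop.sh"]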
I have a React project which is built inside Docker with the following config:
FROM alpine:3.14 as build
RUN apk add --update nodejs npm
RUN npm install -g yarn
RUN yarn set version berry
RUN yarn set version latest
RUN npm config set unsafe-perm true
RUN apk add --no-cache autoconf automake g++ git libtool libpng-dev make nasm python3 py3-pip
ARG NODE_ENV
ARG APP_URL
ARG ENV_TYPE
ENV HOME=/home/app
ENV NODE_ENV=${NODE_ENV}
ENV ENV_TYPE=${ENV_TYPE}
ENV APP_URL=${APP_URL}
ENV API_URL=${APP_URL}
ENV CHAT_URL=${APP_URL}
COPY . $HOME
WORKDIR $HOME
RUN yarn cache clean
RUN yarn install
RUN yarn run build-stage-env-prod
FROM nginx:1.15.3-alpine as final
COPY --from=build /home/app/build /usr/share/nginx/html
COPY --from=build /home/app/lighthouse.html /usr/share/nginx/html/lighthouse.html
COPY --from=build /home/app/default.conf /etc/nginx/conf.d/default.conf
COPY --from=build /home/app/nginx.conf /etc/nginx/nginx.conf
When I run this on my local machine, everything looks fine, but when it's run by Jenkins it fails with the following error:
(the end of log)
➤ YN0000: Failed with errors in 20s 322ms
The command '/bin/sh -c yarn install' returned a non-zero code: 1
Build step 'Execute shell' marked build as failure
Finished: FAILURE
So, the error happens on the yarn install. How can I debug it?
Could the "The remote archive doesn't match the expected checksum" error on one of the packages be the source of the build failure?
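One way to debug a failure that only happens on Jenkins (a sketch, assuming BuildKit is available on the agent; myapp is a placeholder tag) is to request the full, unfolded build output, which shows every yarn diagnostic instead of just the tail:
DOCKER_BUILDKIT=1 docker build --progress=plain -t myapp .
If the checksum mismatch is the only error that remains in the full log, it is indeed the source of the failure.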
I am trying to create a Dockerfile or docker-compose file for a React project that uses node-sass.
I have tried this and almost every solution on here, but none of them work.
FROM node:10.17.0-alpine
RUN apk add --no-cache build-base g++ make python
WORKDIR /app
COPY ./ ./
# node-sass fails during this step
RUN npm install
CMD ["sh"]
The issue is that you are using the Alpine variant, for which node-sass ships no prebuilt binaries (Alpine uses musl rather than glibc), so it has to compile from source. Use the full node image instead.
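A minimal sketch of that change, keeping the rest of the original file (on the Debian-based image, npm can download a prebuilt node-sass binding instead of compiling it):
FROM node:10.17.0
WORKDIR /app
COPY ./ ./
RUN npm install
CMD ["sh"]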
I am running my monolith application in a Docker container and k8s on GKE.
The application has Python and Node dependencies, plus webpack for the front-end bundle.
We have implemented CI/CD, which takes around 5-6 minutes to build and deploy a new version to the k8s cluster.
The main goal is to reduce the build time as much as possible. The Dockerfile is multi-stage.
Webpack takes most of the time generating the bundle. To build the Docker image I am already using a high-spec worker.
To reduce the time I also tried the Kaniko builder.
Issue:
Since Docker caches layers, the Python part works perfectly. But when a JS or CSS file changes, a new bundle has to be generated.
Instead of generating a new bundle when JS and CSS files change, the build reuses the cached layer.
Is there any way to either build a new bundle or use the cache by passing some value to the Dockerfile?
Here is my Dockerfile:
FROM python:3.5 AS python-build
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt &&\
pip3 install Flask-JWT-Extended==3.20.0
ADD . /app
FROM node:10-alpine AS node-build
WORKDIR /app
COPY --from=python-build ./app/app/static/package.json app/static/
COPY --from=python-build ./app ./
WORKDIR /app/app/static
RUN npm cache verify && npm install && npm install -g --unsafe-perm node-sass && npm run sass && npm run build
FROM python:3.5-slim
COPY --from=python-build /root/.cache /root/.cache
WORKDIR /app
COPY --from=node-build ./app ./
RUN apt-get update -yq \
&& apt-get install curl -yq \
&& pip install -r requirements.txt
EXPOSE 9595
CMD python3 run.py
I would suggest creating separate build pipelines for your Docker images, given that the npm and pip requirements don't change very frequently.
This will improve the speed enormously, since the time spent accessing the npm and pip registries is cut out.
Use a private Docker registry (the official one, or something like VMware Harbor or Sonatype Nexus OSS).
You store those builder images in your registry and reuse them whenever something in the project changes.
Something like this:
First Docker Builder // python-builder:YOUR_TAG (gitrev, date, etc.)
docker build --no-cache -t python-builder:YOUR_TAG -f Dockerfile.python.build .
FROM python:3.5
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt &&\
pip3 install Flask-JWT-Extended==3.20.0
Second Docker Builder // js-builder:YOUR_TAG (gitrev, date, etc.)
docker build --no-cache -t js-builder:YOUR_TAG -f Dockerfile.js.build .
FROM node:10-alpine
WORKDIR /app
COPY app/static/package.json /app/app/static
WORKDIR /app/app/static
RUN npm cache verify && npm install && npm install -g --unsafe-perm node-sass
Your application's multi-stage build:
docker build --no-cache -t app_delivery:YOUR_TAG -f Dockerfile.app .
FROM python-builder:YOUR_TAG as python-build
# Nothing to do here, the pip dependencies are already baked into this builder image
FROM js-builder:YOUR_TAG AS node-build
ADD ##### YOUR JS/CSS files only here, required from npm! ###
RUN npm run sass && npm run build
FROM python:3.5-slim
COPY . /app # your original clean app
COPY --from=python-build #### only the files installed with the pip command
WORKDIR /app
COPY --from=node-build ##### Only the generated files from npm here! ###
RUN apt-get update -yq \
&& apt-get install curl -yq \
&& pip install -r requirements.txt
EXPOSE 9595
CMD python3 run.py
One question: why do you install curl and run the pip install -r requirements.txt command again in the final Docker image?
Also, running apt-get update and install every time without cleaning the apt cache (the /var/cache/apt folder) produces a bigger image.
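For illustration, a common pattern (a sketch of what the final stage's RUN could look like) installs and cleans up in the same layer, so the downloaded package lists never persist in the image:
RUN apt-get update -yq \
    && apt-get install -yq --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/* \
    && pip install -r requirements.txt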
As a suggestion, use the docker build command with the --no-cache option to avoid caching results:
docker build --no-cache -t your_image:your_tag -f your_dockerfile .
Remarks:
You'll have 3 separate Dockerfiles, as I listed above.
Build Docker images 1 and 2 only when your pip and npm requirements change; otherwise keep them fixed for your project.
If any dependency requirement changes, update the builder image involved, then update the multi-stage build to point to the latest builder image.
You should only ever rebuild the source code of your project (CSS, JS, Python). This way you also get reproducible builds.
To optimize your environment and the copying of files across the multi-stage builders, try using virtualenv for the Python build.
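As a side note, the "passing some value to the Dockerfile" idea from the question can also work on its own: a build argument declared just before the bundling step invalidates the cache from that point on whenever its value changes (a sketch; CACHE_BUST is a hypothetical argument name):
ARG CACHE_BUST=none
RUN npm run sass && npm run build
Then pass a changing value, e.g. the git revision:
docker build --build-arg CACHE_BUST=$(git rev-parse HEAD) -t app_delivery:YOUR_TAG -f Dockerfile.app .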
I am trying to build a Docker image for my API with the following Dockerfile:
FROM microsoft/dotnet AS build-env
ARG source
RUN echo "source: $source"
WORKDIR /app
RUN apt-get update
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash
RUN apt-get install nodejs
RUN node -v
RUN npm -v
# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
#Copy everything else & build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM microsoft/dotnet
WORKDIR /app
COPY --from=build-env /app/out .
EXPOSE 80
ENTRYPOINT ["dotnet", "API_App.dll"]
However, when I run the docker build command, I keep getting the following error:
Unable to locate package nodejs
The command '/bin/sh -c apt-get install nodejs' returned a non-zero code: 100
Can someone tell me why I am getting this error?
Node Version: 8.11.3
npm Version: 5.6.0
You may occasionally experience some cache issues when the live repositories you’re pulling data from have changed.
To fix this, modify the Dockerfile to do a cleanup and update of the sources before you install any new packages.
...
# clean and update sources
RUN apt-get clean && apt-get update
...
This answer is from digitalocean-issue.
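Applied to the Dockerfile above, that would look something like this (note the added -y flag: docker build runs non-interactively, so apt-get must not prompt for confirmation):
RUN apt-get clean && apt-get update
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash
RUN apt-get install -y nodejs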