I am trying to create a Dockerfile or docker-compose file for a React project that uses node-sass.
I have tried this and almost every solution on here, but none of them works.
FROM node:10.17.0-alpine
RUN apk add --no-cache build-base g++ make python
WORKDIR /app
COPY ./ ./
RUN npm install
# This is where node-sass is failing
CMD ["sh"]
The issue is that you are using the Alpine variant, which does not work with node-sass out of the box. Use the full node image instead.
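For reference, a minimal sketch of that change (keeping the rest of your Dockerfile as it is) would be:
FROM node:10.17.0
WORKDIR /app
COPY ./ ./
RUN npm install
CMD ["sh"]
On the Debian-based image, node-sass can use its prebuilt Linux binding instead of compiling against musl, which is the step that typically fails on Alpine.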
docker can't find file develop.sh even though it's in the root directory
my Dockerfile:
FROM node:16.13.0
WORKDIR /app/medusa
COPY package.json .
COPY develop.sh .
COPY yarn.* .
RUN apt-get update
RUN apt-get install -y python
RUN npm install -g npm@latest
RUN npm install -g @medusajs/medusa-cli@latest
RUN npm install
COPY . .
ENTRYPOINT ["./develop.sh"]
Edit: I am trying to run an open source project called medusa; you can find the code here. I haven't changed anything except the node version in the Dockerfile.
As per @Charles Duffy's suggestion: changing the entrypoint to ENTRYPOINT ["/bin/sh", "./develop.sh"] solved the issue.
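For context (this reasoning is my assumption, not spelled out in the thread): the usual causes of that "can't find file" error are a shebang or line-ending problem in develop.sh, or the file not being directly executable; invoking the script through the shell means Docker no longer has to exec the file itself. The relevant line becomes:
ENTRYPOINT ["/bin/sh", "./develop.sh"]
If the script does have a correct shebang, adding RUN chmod +x ./develop.sh after the COPY is another common fix.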
I'm using pnpm in a Dockerfile, and I have one dependency that is installed from GitHub.
pnpm by default uses yarn to install dependencies from Git.
The problem is that pnpm is not able to run yarn; I think it's some kind of permission problem.
ERROR:
ERR_PNPM_PREPARE_PKG_FAILURE Command failed with exit code 1: /usr/local/bin/yarn install
The command '/bin/sh -c pnpm install' returned a non-zero code: 1
Here is my Dockerfile
FROM node:alpine
RUN npm install -g pnpm
WORKDIR /app
COPY ["package.json", "pnpm-lock.yaml", "./"]
RUN pnpm install
COPY . .
RUN pnpm build
ENV PORT=8080
EXPOSE 80
CMD [ "node", "./build/index.js" ]
Update
This is the repo that is used from GitHub: Baileys.
Everything works perfectly when I install packages outside of Docker: if I run pnpm install locally, it just works. But when I run the build command for the Dockerfile, it fails:
docker build -t name .
As you stated, pnpm uses yarn to install dependencies from Git. From your output, you can see that yarn failed. If you run yarn add https://github.com/adiwajshing/Baileys.git inside the Docker container, it outputs:
info No lockfile found.
[1/4] Resolving packages...
error Couldn't find the binary git
The node:alpine image does not include git.
To resolve the problem, simply install git before pnpm install in the Dockerfile:
FROM node:alpine
RUN apk add --no-cache git
RUN npm install -g pnpm
...
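As a quick sanity check, you can confirm git is absent from the stock base image with a one-off command:
docker run --rm node:alpine sh -c "git --version"
It should fail with something like sh: git: not found; after the apk add --no-cache git step, the same check inside your built image succeeds.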
I am creating a container based on the ruby:2.6-alpine image and trying to add yarn. When I check the yarn version, I get 1.16, while I want something more recent (1.17, specifically).
What do I have to do to get the latest version of Yarn on an alpine image?
My Dockerfile is
FROM ruby:2.6-alpine
RUN apk update && apk add build-base nodejs postgresql-dev bash yarn curl git
RUN mkdir /app
WORKDIR /app
COPY . .
CMD bash
Your base image ruby:2.6-alpine is based on the Alpine v3.10 repositories, which is why you get yarn 1.16.
alpine yarn branch v3.10
All you need is to install it from the edge/community repository instead:
RUN apk add --no-cache yarn --repository="http://dl-cdn.alpinelinux.org/alpine/edge/community"
RUN yarn -v
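Folded into your Dockerfile, that looks something like this (a sketch; yarn is dropped from the first apk add so only the edge version gets installed):
FROM ruby:2.6-alpine
RUN apk update && apk add build-base nodejs postgresql-dev bash curl git
RUN apk add --no-cache yarn --repository="http://dl-cdn.alpinelinux.org/alpine/edge/community"
RUN mkdir /app
WORKDIR /app
COPY . .
CMD bash
Running yarn -v inside the resulting image should then report the newer release pulled from edge/community.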
I am running my monolith application in a docker container and k8s on GKE.
The application has Python & Node dependencies, and uses webpack for the front-end bundle.
We have implemented CI/CD, which takes around 5-6 minutes to build & deploy a new version to the k8s cluster.
The main goal is to reduce the build time as much as possible. The Dockerfile is multi-stage.
Webpack takes most of the time to generate the bundle. To build the docker image I am already using a high-spec worker.
To reduce the time I tried using the Kaniko builder.
Issue:
Docker layer caching works perfectly for the Python code. But when any JS or CSS file changes, we have to generate the bundle again.
Instead, when a JS or CSS file changes, the build reuses the cached layer rather than generating a new bundle.
Is there any way to choose between building a new bundle and reusing the cache by passing some value to the Dockerfile?
Here is my Dockerfile:
FROM python:3.5 AS python-build
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt &&\
pip3 install Flask-JWT-Extended==3.20.0
ADD . /app
FROM node:10-alpine AS node-build
WORKDIR /app
COPY --from=python-build ./app/app/static/package.json app/static/
COPY --from=python-build ./app ./
WORKDIR /app/app/static
RUN npm cache verify && npm install && npm install -g --unsafe-perm node-sass && npm run sass && npm run build
FROM python:3.5-slim
COPY --from=python-build /root/.cache /root/.cache
WORKDIR /app
COPY --from=node-build ./app ./
RUN apt-get update -yq \
&& apt-get install curl -yq \
&& pip install -r requirements.txt
EXPOSE 9595
CMD python3 run.py
I would suggest creating separate build pipelines for your Docker images, since you know that the npm and pip requirements don't change very frequently.
This will improve the speed considerably, reducing the time spent hitting the npm and pip registries.
Use a private Docker registry (the official one, or something like VMware Harbor or Sonatype Nexus OSS).
You store those builder images in your registry and reuse them whenever something in the project changes.
Something like this:
First Docker Builder // python-builder:YOUR_TAG (gitrev, date, etc.)
docker build --no-cache -t python-builder:YOUR_TAG -f Dockerfile.python.build .
FROM python:3.5
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt &&\
pip3 install Flask-JWT-Extended==3.20.0
Second Docker Builder // js-builder:YOUR_TAG (gitrev, date, etc.)
docker build --no-cache -t js-builder:YOUR_TAG -f Dockerfile.js.build .
FROM node:10-alpine
WORKDIR /app
COPY app/static/package.json /app/app/static/
WORKDIR /app/app/static
RUN npm cache verify && npm install && npm install -g --unsafe-perm node-sass
Your Application Multi-stage build:
docker build --no-cache -t app_delivery:YOUR_TAG -f Dockerfile.app .
FROM python-builder:YOUR_TAG as python-build
# Nothing to do, already baked in another build process
FROM js-builder:YOUR_TAG AS node-build
ADD ##### YOUR JS/CSS files only here, required from npm! ###
RUN npm run sass && npm run build
FROM python:3.5-slim
COPY . /app # your original clean app
COPY --from=python-build #### only the files installed with the pip command
WORKDIR /app
COPY --from=node-build ##### Only the generated files from npm here! ###
RUN apt-get update -yq \
&& apt-get install curl -yq \
&& pip install -r requirements.txt
EXPOSE 9595
CMD python3 run.py
One question: why do you install curl and run the pip install -r requirements.txt command again in the final Docker image?
Triggering an apt-get update and install every time without cleaning the apt cache (/var/cache/apt) produces a bigger image.
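If you do keep that step, here is a sketch of the cleanup this refers to (same packages as your file, with the apt lists removed afterwards):
RUN apt-get update -yq \
    && apt-get install curl -yq \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*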
As a suggestion, use the docker build command with the --no-cache option to avoid caching results:
docker build --no-cache -t your_image:your_tag -f your_dockerfile .
Remarks:
You'll have 3 separate Dockerfiles, as I listed above.
Build the Docker images 1 and 2 only if you change your python-pip and node-npm requirements, otherwise keep them fixed for your project.
If any dependency requirement changes, then update the docker image involved and then the multistage one to point to the latest built image.
You should always build only the source code of your project (CSS, JS, Python); this way you also get reproducible builds.
To optimize your environment and copy files across the multi-stage builders, try using a virtualenv for the Python build, as sketched below.
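A minimal sketch of that idea (paths such as /opt/venv are illustrative, not from the original Dockerfile):
FROM python:3.5 AS python-build
WORKDIR /app
COPY requirements.txt ./
RUN python -m venv /opt/venv && /opt/venv/bin/pip install -r requirements.txt

FROM python:3.5-slim
ENV PATH="/opt/venv/bin:$PATH"
COPY --from=python-build /opt/venv /opt/venv
WORKDIR /app
COPY . .
CMD python3 run.py
Because the populated /opt/venv is copied wholesale, the final stage no longer needs to rerun pip install or rely on the /root/.cache trick.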
FROM golang:1.8
RUN apt-get -y update && apt-get install -y curl
RUN go get -u github.com/gorilla/mux
RUN go get github.com/mattn/go-sqlite3
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash - && \
apt-get install -y nodejs
COPY . /go/src/beginnerapp
WORKDIR ./src/beginnerapp/beginner-app-react
RUN npm run build
RUN go install beginnerapp/
WORKDIR /go/src/beginnerapp/beginner-app-react
VOLUME /go/src/beginnerapp/local-db
WORKDIR /go/src/beginnerapp
ENTRYPOINT /go/bin/beginnerapp
EXPOSE 8080
At the start, the golang project as well as the reactjs code don't exist on the image and need to be copied over before being able to build (js) / install (golang). Is there a way I can do that build/install process before copying files over to the image? Ideally I'd only need to copy over the golang executable and reactjs production build.
Yes, this is possible now using multi-stage builds. The idea is that you can have multiple FROM statements in your Dockerfile, and your final image will be built using the last FROM. Below is a sample pseudo-structure:
FROM node:latest as reactbuild
WORKDIR /app
COPY . .
RUN webpack build
FROM golang:latest as gobuild
WORKDIR /app
COPY . .
RUN go build
FROM alpine
WORKDIR /app
COPY --from=gobuild /app/myapp /app/myapp
COPY --from=reactbuild /app/dist /app/dist
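You build it the usual way (the tag here is just an example):
docker build -t myapp .
The final image then contains only the compiled Go binary and the bundled front-end assets, which is the outcome you were after.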
Please read the article below for more details:
https://docs.docker.com/engine/userguide/eng-image/multistage-build/