Dockerfile is caching an old version of a generated file

I'm working on a Dockerfile with a multi-stage build. The general idea is to build the binary for the backend, build the JavaScript bundle for the frontend, and then put these two artifacts in a final container for the app.
Here's the Dockerfile:
# go binary
FROM golang:alpine AS build-go
RUN apk --no-cache add git bzr mercurial
ENV D=/go/src/github.com/tamuhack-org/quack
RUN go get -d -v golang.org/x/net/html
RUN go get -d -v github.com/gorilla/handlers
RUN go get -d -v github.com/gorilla/mux
COPY ./main.go $D/main.go
COPY ./frontend/dist $D/frontend/dist
RUN rm -rf $D/frontend/dist/index.html
RUN rm -rf $D/frontend/dist/index.js
RUN cd $D && go build -o main && cp main /tmp/
# ui
FROM node:alpine AS build-node
RUN mkdir -p /src/ui
COPY ./frontend/package.json /src/ui/
RUN cd /src/ui && yarn install
COPY ./frontend /src/ui
# Replace the dev instance of index.html with the prod version.
RUN rm -rf /src/ui/dist/index.html
RUN mv /src/ui/dist/index-prod.html /src/ui/dist/index.html
RUN cd /src/ui && yarn build
# final
FROM alpine
RUN apk --no-cache add ca-certificates
WORKDIR /app/server/
COPY --from=build-go /tmp/main /app/server/
COPY --from=build-node /src/ui/dist /app/server/frontend/dist
EXPOSE 8080
CMD ["./main"]
What I've noticed is that when I update the frontend source code and rebuild the docker container, the new version of the container doesn't pick up the new bundle. Are there any obvious errors in the Dockerfile that could be the reason why I'm not seeing any file changes? If I run yarn build locally, the bundle is accurate, but the docker container seems to be caching an older version. Thoughts?
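One way to rule out the layer cache (a diagnostic sketch; the quack-test tag is an arbitrary example) is to rebuild from scratch and then inspect the files baked into the final stage:
docker build --no-cache -t quack-test .           # rebuild every layer, ignoring the cache
docker run --rm quack-test ls -l frontend/dist    # inspect the bundle copied into the final stage
If the bundle is fresh with --no-cache but stale without it, an intermediate layer is being reused; if it is stale either way, the wrong files are being copied in (for example, a stale local ./frontend/dist going into the Go stage via its COPY instruction).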

Related

Dockerfile for Meteor 2.2 project

I have been trying for almost 3 weeks to build and run a Meteor app (bundle) using Docker, and I have tried all the major recommendations in the Meteor forums, Stack Overflow and the official documentation with no success. First I tried to make the bundle and put it inside the docker image, with awful results; then I realized that what I need to do is make the bundle inside docker, using a multi-stage Dockerfile. Here is the one I am using right now:
FROM chneau/meteor:alpine as meteor
USER 0
RUN mkdir -p /build /app
RUN chown 1000 -R /build /app
WORKDIR /app
COPY --chown=1000:1000 ./app .
COPY --chown=1000:1000 ./app/packages.json ./packages/
RUN rm -rf node_modules/*
RUN rm -rf .meteor/local/*
USER 1000
RUN meteor update --packages-only
RUN meteor npm ci
RUN meteor build --architecture=os.linux.x86_64 --directory /build
FROM node:lts-alpine as mid
USER 0
RUN apk add --no-cache python make g++
RUN mkdir -p /app
RUN chown 1000 -R /app
WORKDIR /app
COPY --chown=1000:1000 --from=meteor /build/bundle .
USER 1000
WORKDIR /app/programs/server
RUN rm -rf node_modules
RUN rm -f package-lock.json
RUN npm i
FROM node:lts-alpine
USER 0
ENV TZ=America/Santiago
RUN apk add -U --no-cache tzdata && cp /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN mkdir -p /app
RUN chown 1000 -R /app
WORKDIR /app
COPY --chown=1000:1000 --from=mid /app .
USER 1000
ENV LC_ALL=C.UTF-8
ENV ROOT_URL=http://localhost/
ENV MONGO_URL=mongodb://localhost:21027/meteor
ENV PORT=3000
EXPOSE 3000
ENTRYPOINT [ "/usr/local/bin/node", "/app/main.js" ]
If I build with docker build -t my-image:v1 . and then run my app with docker run -d --env-file .dockerenv --publish 0.0.0.0:3000:3000 --name my-bundle my-image:v1, it exposes port 3000, but when I navigate with my browser to http://127.0.0.1:3000 it redirects to https://localhost. If I do docker exec -u 0 -it my-bundle sh, then apk add curl, then curl 127.0.0.1:3000, I can see the meteor app running inside docker.
Has anyone had this issue before? Maybe I am missing some configuration? The bundle also works fine outside docker with node main.js in the bundle folder, and I can visit http://127.0.0.1:3000 with my browser.
Thanks in advance
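A redirect from http to https in a Meteor bundle usually comes from the force-ssl package or from an https ROOT_URL, so one thing worth checking (an assumption, since the contents of .dockerenv aren't shown in the question) is whether the env file overrides ROOT_URL. The value can be forced for a single test run, because -e flags take precedence over --env-file:
docker run -d --env-file .dockerenv -e ROOT_URL=http://127.0.0.1:3000 --publish 3000:3000 --name my-bundle-test my-image:v1
If the redirect disappears, the env file (or a force-ssl entry in .meteor/packages) is the likely culprit.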

Why do I get "curl: not found" inside my node:alpine Docker container?

My api-server Dockerfile is the following:
FROM node:alpine
WORKDIR /src
COPY . .
RUN rm -rf /src/node_modules
RUN rm -rf /src/package-lock.json
RUN yarn install
CMD yarn start:dev
After docker-compose up -d, I tried:
$ docker exec -it api-server sh
/src # curl 'http://localhost:3000/'
sh: curl: not found
Why is the command curl not found?
My host is Mac OS X.
The node:alpine image doesn't come with curl. You need to add the installation instruction to your Dockerfile:
RUN apk --no-cache add curl
A full example based on your Dockerfile would be:
FROM node:alpine
WORKDIR /src
COPY . .
RUN rm -rf /src/node_modules
RUN rm -rf /src/package-lock.json
RUN apk --no-cache add curl
RUN yarn install
CMD yarn start:dev
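After rebuilding the image so the new instruction actually takes effect (for example with docker-compose up -d --build), curl should be available inside the container. A quick check, reusing the container name from the question:
docker exec -it api-server curl http://localhost:3000/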

Monolith docker application with webpack

I am running my monolith application in a docker container and k8s on GKE.
The application contains Python & Node dependencies, and uses webpack for the front-end bundle.
We have implemented CI/CD, which takes around 5-6 min to build & deploy a new version to the k8s cluster.
The main goal is to reduce the build time as much as possible. The Dockerfile is multi-stage.
Webpack takes the most time to generate the bundle. To build the docker image I am already using a high-spec worker.
To reduce time I tried using the Kaniko builder.
Issue:
Since docker caches layers, the Python code works perfectly. But when there is any change in a JS or CSS file, a new bundle has to be generated; instead of generating the new bundle, the build reuses the cached layer.
Is there any way to choose between building a new bundle and using the cache, by passing some value to the Dockerfile?
Here is my Dockerfile:
FROM python:3.5 AS python-build
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt &&\
pip3 install Flask-JWT-Extended==3.20.0
ADD . /app
FROM node:10-alpine AS node-build
WORKDIR /app
COPY --from=python-build ./app/app/static/package.json app/static/
COPY --from=python-build ./app ./
WORKDIR /app/app/static
RUN npm cache verify && npm install && npm install -g --unsafe-perm node-sass && npm run sass && npm run build
FROM python:3.5-slim
COPY --from=python-build /root/.cache /root/.cache
WORKDIR /app
COPY --from=node-build ./app ./
RUN apt-get update -yq \
&& apt-get install curl -yq \
&& pip install -r requirements.txt
EXPOSE 9595
CMD python3 run.py
I would suggest creating separate build pipelines for your docker images, given that your npm and pip requirements don't change that frequently.
This will improve the speed considerably, since it removes the repeated round trips to the npm and pip registries.
Use a private docker registry (the official one, or something like VMware Harbor or Sonatype Nexus OSS).
You store those builder images in your registry and reuse them whenever something in the project changes.
Something like this:
First Docker Builder // python-builder:YOUR_TAG (gitrev, date, etc.)
docker build --no-cache -t python-builder:YOUR_TAG -f Dockerfile.python.build .
FROM python:3.5
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt &&\
pip3 install Flask-JWT-Extended==3.20.0
Second Docker Builder // js-builder:YOUR_TAG (gitrev, date, etc.)
docker build --no-cache -t js-builder:YOUR_TAG -f Dockerfile.js.build .
FROM node:10-alpine
WORKDIR /app
COPY app/static/package.json /app/app/static
WORKDIR /app/app/static
RUN npm cache verify && npm install && npm install -g --unsafe-perm node-sass
Your Application Multi-stage build:
docker build --no-cache -t app_delivery:YOUR_TAG -f Dockerfile.app .
FROM python-builder:YOUR_TAG as python-build
# Nothing to do, already "set in stone" in another build process
FROM js-builder:YOUR_TAG AS node-build
ADD ##### YOUR JS/CSS files only here, required from npm! ###
RUN npm run sass && npm run build
FROM python:3.5-slim
COPY . /app # your original clean app
COPY --from=python-build #### only the files installed with the pip command
WORKDIR /app
COPY --from=node-build ##### Only the generated files from npm here! ###
RUN apt-get update -yq \
&& apt-get install curl -yq \
&& pip install -r requirements.txt
EXPOSE 9595
CMD python3 run.py
A question: why do you install curl and execute the pip install -r requirements.txt command again in the final docker image?
Running apt-get update and install on every build without cleaning the apt cache (the /var/cache/apt folder) produces a bigger image.
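As an illustration (a sketch, not part of the original answer), the cleanup has to happen in the same RUN that does the install, otherwise the cached package metadata is already committed in an earlier layer:
RUN apt-get update -yq \
&& apt-get install -yq --no-install-recommends curl \
&& rm -rf /var/lib/apt/lists/* /var/cache/apt/archives/*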
As a suggestion, use the docker build command with the --no-cache option to avoid reusing cached results:
docker build --no-cache -t your_image:your_tag -f your_dockerfile .
Remarks:
You'll have 3 separate Dockerfiles, as listed above.
Build docker images 1 and 2 only if you change your python-pip and node-npm requirements; otherwise keep them fixed for your project.
If any dependency requirement changes, update the builder image involved, then update the multi-stage build to point to the latest built image.
This way you only ever rebuild the source code of your project (CSS, JS, Python), which also guarantees reproducible builds.
To optimize your environment and the copying of files across the multi-stage builders, try to use virtualenv for the python build.
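On the asker's last point (passing some value to the Dockerfile to force a fresh bundle): a common technique, shown here as a sketch rather than as part of the answer above, is a cache-busting build argument placed immediately before the bundle step, because changing an ARG's value invalidates the cache from that line onward. A fragment of the node-build stage with a hypothetical BUNDLE_REV argument:
FROM js-builder:YOUR_TAG AS node-build
WORKDIR /app/app/static
# changing BUNDLE_REV busts the cache for this and all following layers
ARG BUNDLE_REV=dev
RUN npm run sass && npm run build
Then pass a new value only when the front end changed, e.g. docker build --build-arg BUNDLE_REV=$(git rev-parse --short HEAD) -t app_delivery:YOUR_TAG -f Dockerfile.app .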

Gradle installation through docker file

I wrote a Dockerfile with a Gradle installation inside it. Inside the container, the gradle -v command shows the Gradle version, but when I run gradle -v in a Jenkins job's Execute Shell build step, it fails with gradle: not found.
This is the Gradle installation in the Dockerfile:
#Install gradle
RUN cd /usr/lib \
&& wget https://downloads.gradle.org/distributions/gradle-3.4.1-bin.zip -O gradle-3.4.1-bin.zip \
&& unzip "gradle-3.4.1-bin.zip" \
&& ln -s "/usr/lib/gradle-3.4.1/bin/gradle" /usr/bin/gradle \
&& rm "gradle-3.4.1-bin.zip"
#Env set up
ENV GRADLE_HOME=/usr/lib/gradle-3.4.1
#ENV PATH=$PATH:$GRADLE_HOME/bin:$PATH
ENV PATH=$PATH:$GRADLE_HOME/bin JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
Try this, it works for me.
# Start with a base image containing Java runtime
FROM openjdk:8-jdk-alpine
# Add Maintainer Info
# Add a volume pointing to /tmp
VOLUME /tmp
# Make port 8080 available to the world outside this container
EXPOSE 8080
RUN mkdir /app
WORKDIR /app
COPY . /app
RUN ./gradlew build
ENTRYPOINT ["java","-jar","./build/libs/app-0.1.0.jar"]

Dockerfile: Permission denied during build when running ssh-agent on /tmp

So I'm trying to create an image which adds an SSH private key to /tmp, runs ssh-agent on it, does a git clone and then deletes the key again.
This is the idea I'm trying to accomplish
Dockerfile:
FROM node:4.2.4
MAINTAINER Me
CMD ["/bin/bash"]
ENV GIT_SSL_NO_VERIFY=1
ENV https_proxy="httpsproxy"
ENV http_proxy="httpproxy"
ENV no_proxy="exceptions"
ADD projectfolder/key /tmp/
RUN ssh-agent /tmp
WORKDIR /usr/src/app
RUN git clone git@gitlab.private.address:something/target.git
RUN rm /tmp/key
WORKDIR /usr/src/app/target
RUN npm install
EXPOSE 3001
Now the problem lies within the build-process. I use the following command to build:
docker build -t samprog/targetimage:4.2.4 -f projectfolder/dockerfile .
The layers up to "ADD projectfolder/key /tmp/" work just fine, but the "RUN ssh-agent /tmp" layer doesn't want to cooperate.
Error code:
Step 9 : RUN ssh-agent /tmp/temp
---> Running in d2ed7c8870ae
/tmp: Permission denied
The command '/bin/sh -c ssh-agent /tmp' returned a non-zero code: 1
Any ideas? Since I thought it was a permission issue where the directory had already been created by the parent image, I created /tmp/temp and put the key in there. That doesn't work either; same error.
I'm using Docker version 1.10.3 on SLES12 SP1
I did it. What I did was get rid of ssh-agent: I simply copied the ~/.ssh directory of my docker host into /root/.ssh of the image, and it worked.
Do not use the ~ directly though; copy the ~/.ssh directory into the project folder first, and then COPY it from there into the container with the dockerfile.
Final dockerfile looked as follows:
FROM node:4.2.4
MAINTAINER me
CMD["/bin/bash"]
ENV GIT_SSL_NO_VERIFY=1
ENV https_proxy="httpsproxy"
ENV http_proxy="httpproxy"
ENV no_proxy="exceptions"
ADD projectfolder/.ssh /root/.ssh
WORKDIR /usr/src/app
RUN git clone git@gitlab.private.address:something/target.git
RUN rm -r /root/.ssh
WORKDIR /usr/src/app/target
RUN npm set registry http://local-npm-registry
RUN npm install
EXPOSE 3001
The dockerfile still has to be improved for efficiency and such, but it works! Eureka!
The image now has to be squashed, and then it should be safe to use, though we only use it in our local registry.
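Worth spelling out why squashing matters (an editorial addition, not from the answer): RUN rm -r /root/.ssh removes the keys from the final filesystem view, but they remain readable in the earlier ADD layer of the image history. The experimental --squash flag collapses the layers at build time:
docker build --squash -t samprog/targetimage:4.2.4 -f projectfolder/dockerfile .
A multi-stage build that clones in a throwaway stage and copies only the sources into the final image avoids the problem without experimental flags.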
I faced the same problem with maven:3-alpine. It was solved once I properly installed openssh-client:
RUN apk --update add openssh-client
Then copied the keys and known_hosts file into the image:
ADD id_rsa /root/.ssh/
ADD id_rsa.pub /root/.ssh/
ADD known_hosts /root/.ssh/
And ran the git clone command inline (with ssh-agent and ssh-add):
RUN eval $(ssh-agent -s) \
&& ssh-add \
&& git clone ssh://git#private.address:port/project/project.git
Complete Dockerfile:
FROM maven:3-alpine
RUN apk update
RUN apk add python
RUN apk add ansible
RUN apk add git
RUN apk --update add openssh-client
ADD id_rsa /root/.ssh/
ADD id_rsa.pub /root/.ssh/
ADD known_hosts /root/.ssh/
RUN eval $(ssh-agent -s) \
&& ssh-add \
&& git clone ssh://git#private.address:port/project/project.git
ADD hosts /etc/ansible/hosts
RUN ansible all -m ping --ask-pass
I had the same issue when executing any bash command while building my Dockerfile.
I solved it by adding RUN chmod -R 777 ./, as suggested in the answer to this question. I think this is a workaround; I'm not sure why docker on ubuntu has permission issues when building a container.
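For completeness, a more modern alternative to copying keys into layers (an addition; none of the answers above mention it) is BuildKit's SSH mount, which forwards the host's ssh-agent to a single RUN step so no key is ever written into the image. A minimal sketch, reusing the host name from the question:
# syntax=docker/dockerfile:1
FROM node:4.2.4
WORKDIR /usr/src/app
# record the server's host key so the clone is non-interactive
RUN mkdir -p /root/.ssh && ssh-keyscan gitlab.private.address >> /root/.ssh/known_hosts
# the agent socket exists only for the duration of this RUN step
RUN --mount=type=ssh git clone git@gitlab.private.address:something/target.git
Build it with DOCKER_BUILDKIT=1 docker build --ssh default . on a Docker version that includes BuildKit.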
