I am having trouble optimizing my Docker build step.
Below is my use case:
In my Jenkinsfile I am building 3 Docker images:
from *docker/test/Dockerfile*
from *docker/dev/Dockerfile*
stage('Build') {
    steps {
        sh 'docker build -t test -f docker/test/Dockerfile .'
        sh 'set +x && eval $(/usr/local/bin/aws-login/aws-login.sh $AWS_ACCOUNT jenkins eu-west-2) \
            && docker build -t dev --build-arg S3_FILE_NAME=environment.dev.ts \
            --build-arg CONFIG_S3_BUCKET_URI=s3://bucket \
            --build-arg AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN \
            --build-arg AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION \
            --build-arg AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
            --build-arg AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
            -f docker/dev/Dockerfile .'
        sh 'set +x && eval $(/usr/local/bin/aws-login/aws-login.sh $AWS_ACCOUNT jenkins eu-west-2) \
            && docker build -t qa --build-arg S3_FILE_NAME=environment.qa.ts \
            --build-arg CONFIG_S3_BUCKET_URI=s3://bucket \
            --build-arg AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN \
            --build-arg AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION \
            --build-arg AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
            --build-arg AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
            -f docker/dev/Dockerfile .'
    }
}
stage('Test') {
    steps {
        sh 'docker run --rm test npm run test'
    }
}
Below are my two Dockerfiles:
docker/test/Dockerfile:
FROM node:lts
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
RUN apt-key update && apt-get update && apt-get install -y google-chrome-stable
COPY . /usr/src/app
RUN npm install
CMD sh ./docker/test/docker-entrypoint.sh
docker/dev/Dockerfile:
FROM node:lts as dev-builder
ARG CONFIG_S3_BUCKET_URI
ARG S3_FILE_NAME
ARG AWS_SESSION_TOKEN
ARG AWS_DEFAULT_REGION
ARG AWS_SECRET_ACCESS_KEY
ARG AWS_ACCESS_KEY_ID
RUN apt-get update
RUN apt-get install python3-dev -y
RUN curl -O https://bootstrap.pypa.io/get-pip.py
RUN python3 get-pip.py
RUN pip3 install awscli --upgrade
RUN mkdir /app
WORKDIR /app
COPY . .
RUN aws s3 cp "$CONFIG_S3_BUCKET_URI/$S3_FILE_NAME" src/environments/environment.dev.ts
RUN cat src/environments/environment.dev.ts
RUN npm install
RUN npm run build-dev
FROM nginx:stable
COPY nginx.conf /etc/nginx/nginx.conf
COPY --from=dev-builder /app/dist/ /usr/share/nginx/html/
Every time, it takes 20-25 minutes to build the images.
Is there any way I can optimize the Dockerfiles for a better build process?
Suggestions are welcome. RUN npm run build-dev uses package.json to install the dependencies, which is one of the reasons every build installs all of the dependencies.
Thanks
You can use a combination of base images and multi-stage builds to speed up your builds.
Base image with pre-installed packages/dependencies
Installing python3, pip, google-chrome, awscli, etc. does not need to be done on every build. These layers might get cached if you are building on a single machine, but if you have multiple build machines or you clean the cache, you will be re-building these layers unnecessarily. You can build a base image that already contains these packages and use that new image as the base for your app.
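For example, a rough sketch of such a base image (the image name, registry and docker/base/Dockerfile path below are assumptions, not taken from your setup):
# hypothetical base image, e.g. pushed as my-registry/node-chrome-awscli:lts
FROM node:lts
# Chrome for running the tests
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
    && echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google.list \
    && apt-get update && apt-get install -y google-chrome-stable
# awscli for fetching the environment files from S3
RUN apt-get install -y python3-dev curl \
    && curl -O https://bootstrap.pypa.io/get-pip.py \
    && python3 get-pip.py \
    && pip3 install --upgrade awscli
Build and push it once (for example docker build -t my-registry/node-chrome-awscli:lts -f docker/base/Dockerfile . followed by docker push my-registry/node-chrome-awscli:lts), then start your app Dockerfiles with FROM my-registry/node-chrome-awscli:lts instead of repeating these steps.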
Multi stage builds
You are copying your source code and then doing npm install. Even if package.json has not changed, the npm install layer will be re-built if any other file in the source code has changed.
You can create a multi-stage Dockerfile where you copy just package.json in the first stage and run npm install and other such commands there. That layer will be re-built only if package.json changes.
In your second stage, you can copy the installed dependencies (node_modules) from the first stage.
FROM node:lts as dev-builder
WORKDIR /cache/
COPY package.json .
RUN npm install

FROM NEW_BASE_IMAGE_WITH_CHROME_ETC_DEPENDENCIES
WORKDIR /app
COPY --from=dev-builder /cache/node_modules ./node_modules
COPY . .
RUN npm run build-dev
<snip>
Identify any other such optimisations that you can make.
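One such optimisation worth checking, if you do not have it already, is a .dockerignore file, so that node_modules, build output and VCS metadata are not sent as build context and do not needlessly invalidate the COPY . . layer (the entries below are only a typical example):
node_modules
dist
.git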
Related
I am trying to build a Docker image from a Dockerfile I received from the previous developer:
bash-5.1$ ls
data_collection demo.py examples requirements.txt start.py
demonstrateur.ipynb Dockerfile README.md serious_game test
bash-5.1$ docker build Dockerfile .
Usage: docker build [OPTIONS] PATH | URL | -
Build an image from a Dockerfile
I also tried with
bash-5.1$ docker build -t serious-game:0.0.1 -t serious-game:latest Dockerfile .
and I already completely reinstalled Docker by following this tutorial, but it gave the same error.
Here is my Dockerfile content:
bash-5.1$ cat Dockerfile
FROM nvidia/cuda:10.2-base-ubuntu18.04
MAINTAINER me
EXPOSE 5555
EXPOSE 8886
ENV DEBIAN_FRONTEND noninteractive
ENV WD=/home/serious-game/
WORKDIR ${WD}
# Add git and ssh
RUN apt-get -y update && \
apt-get -y upgrade && \
apt-get -y install git ssh pkg-config python3-pip python3-opencv
# Python dependencies
COPY requirements.txt /requirements.txt
RUN cd / && \
python3 -m pip install --upgrade pip && \
pip3 install -r requirements.txt
CMD ["start.py"]
If you are trying to build an image from a local Dockerfile, and your current bash location is the same folder where the Dockerfile resides, all you have to do is:
docker build .
As written in the docs, Docker uses the file named Dockerfile by default. If you want to specify a different file, you can use the --file (or -f) option of the docker build command.
In your case, to solve your problem you can just use:
docker build -t serious-game:0.0.1 -t serious-game:latest .
But if you want to specify another file named TestDockerfile (example for testing):
docker build -t serious-game:0.0.1 -t serious-game:latest -f TestDockerfile .
I am using a Dockerfile to install a tool. I am running the command docker build -f Dockerfile -t ubuntu:mytool . to initiate the build. The line RUN ./toolPackageInstaller expects two user inputs halfway through the installation: (1) an install path selection and (2) an integer for timezone info. How do I hardcode this info in the Dockerfile, or run docker build in interactive mode so the user can input these values during the install process?
FROM ubuntu:bionic
WORKDIR /tmp
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
build-essential \
sudo \
git \
make
COPY ToolPackage.tar.xz /tmp
RUN tar xvfJp /tmp/ToolPackage.tar.xz
WORKDIR /tmp/ToolPackage
RUN chmod +x toolPackageInstaller
RUN ./toolPackageInstaller
Use build arguments for those desired inputs:
FROM ubuntu:bionic
ARG ARGUMENT_1=<HARDCODED_VALUE>
ARG ARGUMENT_2=<HARDCODED_VALUE>
WORKDIR /tmp
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
build-essential \
sudo \
git \
make
COPY ToolPackage.tar.xz /tmp
RUN tar xvfJp /tmp/ToolPackage.tar.xz
WORKDIR /tmp/ToolPackage
RUN chmod +x toolPackageInstaller
RUN ./toolPackageInstaller $ARGUMENT_1 $ARGUMENT_2
Then configure the toolPackageInstaller script to use those values as input (referring to them as $1 and $2).
By default it will run with the hardcoded values, and you can also override them if you desire:
docker build --build-arg ARGUMENT_1=<NEW_VALUE> --build-arg ARGUMENT_2=<ANOTHER_NEW_VALUE> -t ubuntu:mytool .
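If the installer ignores positional arguments and only reads its answers interactively from standard input, a common alternative (only a sketch, not verified against your particular installer) is to pipe the answers into it from the RUN line:
# hypothetical: feed the install path and the timezone integer as two lines on stdin
RUN printf '%s\n%s\n' "$ARGUMENT_1" "$ARGUMENT_2" | ./toolPackageInstaller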
I am basing my Dockerfile on the Rust base image.
When deploying my image to an Azure container, I receive this log:
./bot: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by ./bot)
./bot is my application.
The error also occurs when I perform docker run on my Linux Mint desktop.
How can I get GLIBC into my container?
Dockerfile
FROM rust:1.50
WORKDIR /usr/vectorizer/
COPY ./Cargo.toml /usr/vectorizer/Cargo.toml
COPY ./target/release/trampoline /usr/vectorizer/trampoline
COPY ./target/release/bot /usr/vectorizer/bot
COPY ./target/release/template.svg /usr/vectorizer/template.svg
RUN apt-get update && \
apt-get dist-upgrade -y && \
apt-get install -y musl-tools && \
rustup target add x86_64-unknown-linux-musl
CMD ["./trampoline"]
Now, I don't totally understand the dependencies of your particular project, but the Dockerfile below should get you started.
What you want to do is compile in an image that has all of your dev dependencies and then move the build artifacts to a much smaller (but compatible) image. Because the binary is then compiled against the glibc inside the build image, it matches the glibc of the runtime image instead of requiring the newer GLIBC_2.29 from the machine where you originally built it.
FROM rust:1.50 as builder
RUN USER=root
RUN mkdir bot
WORKDIR /bot
ADD . ./
RUN cargo clean && \
cargo build -vv --release
FROM debian:buster-slim
ARG APP=/usr/src/app
ENV APP_USER=appuser
RUN groupadd $APP_USER \
&& useradd -g $APP_USER $APP_USER \
&& mkdir -p ${APP}
# Copy the compiled binaries into the new container.
COPY --from=builder /bot/target/release/trampoline ${APP}/trampoline
COPY --from=builder /bot/target/release/bot ${APP}/bot
RUN chown -R $APP_USER:$APP_USER ${APP}
USER $APP_USER
WORKDIR ${APP}
CMD ["./trampoline"]
I am trying to automate a Docker build in a Jenkins pipeline. In my Dockerfile, I basically build a Node application. In my npm install, I have some private git repositories which need OS bindings and so have to be installed in the container. When I run this manually, I transfer my SSH key (id_rsa) into the Dockerfile build, which is used for doing npm install. Now, my problem is that when running this task in the Jenkins pipeline, I will be configuring an ssh-agent (Jenkins plugin). It will not be possible to extract the private key from the ssh-agent. How should I pass my ssh-agent to my Dockerfile?
EDIT 1:
I got it partially working with this:
Docker Build Command:
DOCKER_BUILDKIT=1 docker build --no-cache -t $DOCKER_REGISTRY_URL/$IMAGE_NAME:v$BUILD_NUMBER --ssh default . &&
Then in the Dockerfile:
This works fine:
RUN --mount=type=ssh GIT_SSH_COMMAND="ssh -vvvT -o StrictHostKeyChecking=no" \
    git clone git@github.com:****
The weird thing is that this doesn't work:
RUN --mount=type=ssh GIT_SSH_COMMAND="ssh -vvvT -o StrictHostKeyChecking=no" npm install git+ssh://git@github.com:****
I feel this has something to do with StrictHostKeyChecking=no.
I finally got it working by using the root user in the Dockerfile and setting the npm cache to /root.
The problem was that git was using the /root/.ssh folder while npm was using a different path, /home/.ssh, as its npm cache was set to /home/.ssh.
For anyone still struggling, this is the config I used:
Docker Build Command:
DOCKER_BUILDKIT=1 docker build --no-cache -t test --ssh default .
Dockerfile:
USER root
RUN apt-get update && \
apt-get install -y \
git \
openssh-server \
openssh-client
RUN mkdir -p -m 600 /root/.ssh && ssh-keyscan github.com >> /root/.ssh/known_hosts && echo "Host *\n StrictHostKeyChecking no" > /root/.ssh/config
RUN echo "Check ssh_config" && cat /root/.ssh/config
RUN rm -rf node_modules
RUN npm config set cache /root
RUN --mount=type=ssh GIT_SSH_COMMAND="ssh -vvvT" npm install
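On the Jenkins side, the piece that ties this together (sketched here with an assumed credentials ID, so adapt it to your setup) is that the ssh-agent plugin exports SSH_AUTH_SOCK, and BuildKit's --ssh default forwards that agent socket into the RUN --mount=type=ssh steps without ever copying the private key into the image:
# run inside an sshagent(['my-git-key']) block, where the plugin sets SSH_AUTH_SOCK
DOCKER_BUILDKIT=1 docker build --ssh default=$SSH_AUTH_SOCK -t test .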
I am using a multi-stage Dockerfile configuration similar to the one below.
FROM swift:4.1 as builder
WORKDIR /app
COPY . .
RUN swift build --configuration release && mv `swift build -c release --show-bin-path` /build/bin
FROM ubuntu:16.04
RUN apt-get -qq update && apt-get install -y \
libicu55 libxml2 libbsd0 libcurl3 libatomic1 wget && rm -r /var/lib/apt/lists/*
RUN /bin/bash -c "$(wget -qO- https://apt.vapor.sh)"
RUN wget -q https://repo.vapor.codes/apt/keyring.gpg -O- | apt-key add -
RUN apt-get update && apt-get install swift vapor -y
WORKDIR /app
COPY --from=builder /build/bin .
COPY --from=builder /build/lib/* /usr/lib/
EXPOSE 3000
ENTRYPOINT ./Run serve -e prod -b 0.0.0.0 -p 3000
I am currently using this to deploy my service on a virtual server, which, due to its low performance, takes forever to build the project.
Is it good practice, and possible, to build the builder image and upload it to a private repo on Docker Hub, so I can do that part from my local machine?
Could I then just run the second stage on my virtual server? That means:
FROM myPrivateImageBuiltLocally as builder
WORKDIR /app
COPY . .
FROM ubuntu:16.04
RUN apt-get -qq update && apt-get install -y \
libicu55 libxml2 libbsd0 libcurl3 libatomic1 wget && rm -r /var/lib/apt/lists/*
RUN /bin/bash -c "$(wget -qO- https://apt.vapor.sh)"
RUN wget -q https://repo.vapor.codes/apt/keyring.gpg -O- | apt-key add -
RUN apt-get update && apt-get install swift vapor -y
WORKDIR /app
COPY --from=builder /build/bin .
COPY --from=builder /build/lib/* /usr/lib/
EXPOSE 3000
ENTRYPOINT ./Run serve -e prod -b 0.0.0.0 -p 3000
Yes, you can do it. You don't have to build it locally; you can use the automated build feature of Docker Hub. It works like this:
1. Push the code to GitHub/Bitbucket.
2. Create a new image repository on Docker Hub and map it to the GitHub repo.
This will automatically build the image each time you push a new commit to the GitHub repo.
You can also see all the stats, like build logs, success or failure, number of downloads, etc.
ref: https://docs.docker.com/docker-cloud/builds/automated-build/#configure-automated-build-settings
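As for running only the second stage on the server: yes, that works too, because COPY --from= can reference a pushed image directly instead of a named build stage. A minimal sketch, assuming the builder image was pushed as myuser/vapor-builder:latest (a hypothetical name) and contains the artifacts under /build as in your first Dockerfile:
FROM ubuntu:16.04
RUN apt-get -qq update && apt-get install -y \
    libicu55 libxml2 libbsd0 libcurl3 libatomic1 wget && rm -r /var/lib/apt/lists/*
WORKDIR /app
# pull the artifacts straight from the pre-built, pushed image
COPY --from=myuser/vapor-builder:latest /build/bin .
COPY --from=myuser/vapor-builder:latest /build/lib/* /usr/lib/
EXPOSE 3000
ENTRYPOINT ./Run serve -e prod -b 0.0.0.0 -p 3000
Whether you still need the swift and vapor runtime packages from your original second stage depends on how the binary is linked; this sketch only shows the COPY --from mechanics.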