I am basing my Dockerfile on the Rust base image.
When deploying my image to an Azure container, I receive this log:
./bot: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by ./bot)
./bot is my application.
The error also occurs when I run the image with docker run on my Linux Mint desktop.
How can I get GLIBC into my container?
Dockerfile
FROM rust:1.50
WORKDIR /usr/vectorizer/
COPY ./Cargo.toml /usr/vectorizer/Cargo.toml
COPY ./target/release/trampoline /usr/vectorizer/trampoline
COPY ./target/release/bot /usr/vectorizer/bot
COPY ./target/release/template.svg /usr/vectorizer/template.svg
RUN apt-get update && \
apt-get dist-upgrade -y && \
apt-get install -y musl-tools && \
rustup target add x86_64-unknown-linux-musl
CMD ["./trampoline"]
Now, I don't totally understand the dependencies of your particular project, but the Dockerfile below should get you started.
What you want to do is compile in an image that has all of your dev dependencies, and then move the build artifacts to a much smaller (but compatible) image.
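To confirm the mismatch, you can compare the glibc version on the machine that compiled ./bot with the one inside the base image (a quick diagnostic; ldd --version prints the glibc version):
# glibc on the host where ./bot was compiled
ldd --version
# glibc available inside the base image
docker run --rm rust:1.50 ldd --version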
FROM rust:1.50 as builder
WORKDIR /bot
COPY . ./
RUN cargo clean && \
cargo build -vv --release
FROM debian:buster-slim
ARG APP=/usr/src/app
ENV APP_USER=appuser
RUN groupadd $APP_USER \
&& useradd -g $APP_USER $APP_USER \
&& mkdir -p ${APP}
# Copy the compiled binaries into the new container.
COPY --from=builder /bot/target/release/trampoline ${APP}/trampoline
COPY --from=builder /bot/target/release/bot ${APP}/bot
RUN chown -R $APP_USER:$APP_USER ${APP}
USER $APP_USER
WORKDIR ${APP}
CMD ["./trampoline"]
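For completeness, a typical build-and-run sequence for this two-stage file (the image tag is just an example):
docker build -t vectorizer-bot .
docker run --rm vectorizer-bot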
Related
I am building a container for a Rust app deployed on Argonaut, but it's not able to start. Here is the Dockerfile.
FROM rust:1.64.0-buster AS builder
WORKDIR /app
ARG TOKEN
ARG DATABASE_URL
RUN git config --global url."https://${TOKEN}:@github.com/".insteadOf "https://github.com/"
COPY . .
ENV CARGO_NET_GIT_FETCH_WITH_CLI true
RUN rustup component add rustfmt
RUN apt-get update -y && apt-get install git wget ca-certificates curl gnupg lsb-release cmake libcurl4 -y
RUN cargo build
FROM debian:buster-slim
WORKDIR /app
COPY --from=builder /app/target/debug/linkedin /app/target/release/linkedin
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
CMD ["/app/target/release/linkedin"]
EXPOSE 3000
It builds successfully, but when it runs it exits with error code 127.
linkedin-leadr-1 | /app/target/release/linkedin: error while loading shared libraries: libcurl.so.4: cannot open shared object file: No such file or directory
I have not found what's wrong with it: even though I am installing libcurl4, my docker container is not able to find it. Can you please give me the solution?
You install libcurl4 in your build environment but not in your execution environment; that's most likely the reason.
There are two ways to solve this:
Install libcurl4 in your final image, or
Link statically by replacing cargo build with
RUN rustup target add x86_64-unknown-linux-musl
RUN cargo build --target=x86_64-unknown-linux-musl --release
(With the musl target, the binary ends up under target/x86_64-unknown-linux-musl/release/, so the COPY path in the final stage needs adjusting accordingly.)
The --release flag should be added either way, as I'm sure you don't want to deliver unoptimized debug builds to your end user ;)
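Either way, ldd is handy for checking which shared libraries a binary needs and which of them are missing inside a given image (libcurl.so.4 here stands in for whatever your binary links against):
ldd /app/target/release/linkedin
# missing libraries show up as "not found", e.g.
#   libcurl.so.4 => not found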
Note that if you choose to install libcurl4 in your final image, you need to clean up the apt cache afterwards, otherwise your image grows immensely:
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install --yes \
libcurl4 \
&& apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
The full Dockerfile with libcurl4 installed would then look like this:
FROM rust:1.64.0-buster AS builder
WORKDIR /app
ARG TOKEN
ARG DATABASE_URL
RUN git config --global url."https://${TOKEN}:@github.com/".insteadOf "https://github.com/"
COPY . .
ENV CARGO_NET_GIT_FETCH_WITH_CLI true
RUN rustup component add rustfmt
RUN apt-get update -y && apt-get install git wget ca-certificates curl gnupg lsb-release cmake libcurl4 -y
RUN cargo build --release
FROM debian:buster-slim
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install --yes \
libcurl4 \
&& apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
WORKDIR /app
COPY --from=builder /app/target/release/linkedin /app/target/release/linkedin
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
CMD ["/app/target/release/linkedin"]
EXPOSE 3000
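To build and run it (the token value is a placeholder for your real one):
docker build --build-arg TOKEN=<your-github-token> -t linkedin .
docker run -p 3000:3000 linkedin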
I have a Dockerfile in which I use wget to fetch something into the image. But the build is failing with 'wget command not found'. When I googled, I found suggestions to install wget like below:
RUN apt update && apt upgrade
RUN apt install wget
Dockerfile:
FROM openjdk:17
LABEL maintainer="app"
ARG uname
ARG pwd
RUN useradd -ms /bin/bash -u 1000 user1
COPY . /app
WORKDIR /app
RUN ./gradlew build -PmavenUsername=$uname -PmavenPassword=$pwd
ARG YOURKIT_VERSION=2021.11
ARG POLARIS_YK_DIR=YourKit-JavaProfiler-2019.8
RUN wget https://www.yourkit.com/download/docker/YourKit-JavaProfiler-${YOURKIT_VERSION}-docker.zip --no-check-certificate -P /tmp/ && \
unzip /tmp/YourKit-JavaProfiler-${YOURKIT_VERSION}-docker.zip -d /usr/local && \
mv /usr/local/YourKit-JavaProfiler-${YOURKIT_VERSION} /usr/local/$POLARIS_YK_DIR && \
rm /tmp/YourKit-JavaProfiler-${YOURKIT_VERSION}-docker.zip
EXPOSE 10001
EXPOSE 8080
EXPOSE 5005
USER 1000
ENTRYPOINT ["sh", "/docker_entrypoint.sh"]
On doing this I am getting the error apt-get not found. Can someone suggest a solution?
The openjdk image you use is based on Oracle Linux, which uses microdnf rather than apt as its package manager.
To install wget (and unzip which you also need), you can add this to your Dockerfile:
RUN microdnf update \
&& microdnf install --nodocs wget unzip \
&& microdnf clean all \
&& rm -rf /var/cache/yum
The commands clean up the package cache after installing, to keep the image size as small as possible.
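If you're ever unsure which package manager a base image ships with, you can check before editing the Dockerfile (a quick probe; it assumes the image has a shell, and exits nonzero for the names it doesn't find):
docker run --rm openjdk:17 sh -c 'command -v microdnf dnf yum apt-get'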
Hi, I have a Dockerfile which is failing on the COPY command. It was running fine initially, but then it suddenly started failing during the build process. The Dockerfile basically sets up the dev environment and authenticates with GCP.
FROM ubuntu:16.04
## ENV Variables
ENV PYTHON_VERSION="3.6.5"
ENV BUCKET_NAME='detection-sandbox'
ENV DIRECTORY='/usr/local/gcloud'
# Update and Install packages
RUN apt-get update -y \
&& apt-get install -y \
curl \
wget \
tar \
xz-utils \
bc \
build-essential \
cmake \
curl \
zlib1g-dev \
libssl-dev \
libsqlite3-dev \
python3-pip \
python3-setuptools \
unzip \
g++ \
git \
python-tk
# Install Python 3.6.5
RUN wget https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tar.xz \
&& tar -xvf Python-${PYTHON_VERSION}.tar.xz \
&& rm -rf Python-${PYTHON_VERSION}.tar.xz \
&& cd Python-${PYTHON_VERSION} \
&& ./configure \
&& make install \
&& cd / \
&& rm -rf Python-${PYTHON_VERSION}
# Install pip
RUN curl -O https://bootstrap.pypa.io/get-pip.py \
&& python3 get-pip.py \
&& rm get-pip.py
# Add SNI support to Python
RUN pip --no-cache-dir install \
pyopenssl \
ndg-httpsclient \
pyasn1
## Download and Install Google Cloud SDK
RUN mkdir -p /usr/local/gcloud \
&& curl https://sdk.cloud.google.com > install.sh \
&& bash install.sh --disable-prompts --install-dir=${DIRECTORY}
# Adding the package path to directory
ENV PATH $PATH:${DIRECTORY}/google-cloud-sdk/bin
# working directory
WORKDIR /usr/src/app
COPY requirements.txt ./ \
testproject-264512-9de8b1b35153.json ./
It fails at this step :
Step 13/21 : COPY requirements.txt ./ testproject-264512-9de8b1b35153.json ./
COPY failed: stat /var/lib/docker/tmp/docker-builder942576416/testproject-264512-9de8b1b35153.json: no such file or directory
Any leads on this would be helpful.
How are you running the docker build command?
In Docker's best practices I've read that the build fails if you try to build your image from stdin using -:
Attempting to build a Dockerfile that uses COPY or ADD will fail if this syntax is used. The following example illustrates this:
# create a directory to work in
mkdir example
cd example
# create an example file
touch somefile.txt
docker build -t myimage:latest -<<EOF
FROM busybox
COPY somefile.txt .
RUN cat /somefile.txt
EOF
# observe that the build fails
...
Step 2/3 : COPY somefile.txt .
COPY failed: stat /var/lib/docker/tmp/docker-builder249218248/somefile.txt: no such file or directory
I've reproduced the issue... Here is my Dockerfile:
FROM alpine:3.7
## ENV Variables
ENV PYTHON_VERSION="3.6.5"
ENV BUCKET_NAME='detection-sandbox'
ENV DIRECTORY='/usr/local/gcloud'
# working directory
WORKDIR /usr/src/app
COPY kk.txt ./ \
kk.2.txt ./
If I create the image by running docker build -t testimage:1 [DOCKERFILE_FOLDER], docker creates the image and it works correctly.
However, if I try the same from stdin:
docker build -t test:2 - <<EOF
FROM alpine:3.7
ENV PYTHON_VERSION="3.6.5"
ENV BUCKET_NAME='detection-sandbox'
ENV DIRECTORY='/usr/local/gcloud'
WORKDIR /usr/src/app
COPY kk.txt ./ kk.2.txt ./
EOF
I get the following error:
Step 1/6 : FROM alpine:3.7
---> 6d1ef012b567
Step 2/6 : ENV PYTHON_VERSION="3.6.5"
---> Using cache
---> 734d2a106144
Step 3/6 : ENV BUCKET_NAME='detection-sandbox'
---> Using cache
---> 18fba29fffdc
Step 4/6 : ENV DIRECTORY='/usr/local/gcloud'
---> Using cache
---> d926a3b4bc85
Step 5/6 : WORKDIR /usr/src/app
---> Using cache
---> 57a1868f5f27
Step 6/6 : COPY kk.txt ./ kk.2.txt ./
COPY failed: stat /var/lib/docker/tmp/docker-builder518467298/kk.txt: no such file or directory
It seems that docker builds images from /var/lib/docker/tmp/ when you build from stdin; there is no build context, so ADD and COPY commands don't work.
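If you do want to pipe the Dockerfile through stdin, you can still supply a build context by passing -f- together with a context path; Docker documents this variant as working with COPY:
docker build -t myimage:latest -f- . <<EOF
FROM busybox
COPY somefile.txt .
RUN cat /somefile.txt
EOF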
An incorrect source path is a common cause of this error. Note that source paths are resolved relative to the build context, not the Dockerfile's location.
Use
COPY ./directory/testproject-264512-9de8b1b35153.json /dir/
instead of
COPY testproject-264512-9de8b1b35153.json /dir/
On the Alpine Linux package site https://pkgs.alpinelinux.org/packages, NSCA packages are yet to be added. Is there an alternative way to set up NSCA in Alpine Linux for passive checks?
If there is no package for it, you can always build it yourself.
FROM alpine AS builder
ARG NSCA_VERSION=2.9.2
RUN apk update && apk add build-base gcc wget git
RUN wget http://prdownloads.sourceforge.net/nagios/nsca-$NSCA_VERSION.tar.gz
RUN tar xzf nsca-$NSCA_VERSION.tar.gz
RUN cd nsca-$NSCA_VERSION && ./configure && make all
RUN ls -lah nsca-$NSCA_VERSION/src
RUN mkdir -p /dist/bin && cp nsca-$NSCA_VERSION/src/nsca /dist/bin
RUN mkdir -p /dist/etc && cp nsca-$NSCA_VERSION/sample-config/nsca.cfg /dist/etc
FROM alpine
COPY --from=builder /dist/bin/nsca /bin/
COPY --from=builder /dist/etc/nsca.cfg /etc/
Since this is using multiple stages, your resulting image will not contain development files and will still be small.
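A quick smoke test of the result (this only checks that the files landed; how you actually run nsca depends on your configuration):
docker build -t nsca .
docker run --rm nsca ls -l /bin/nsca /etc/nsca.cfg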
I am using a Dockerfile multistage configuration similar to the one below.
FROM swift:4.1 as builder
WORKDIR /app
COPY . .
RUN swift build --configuration release && mv `swift build -c release --show-bin-path` /build/bin
FROM ubuntu:16.04
RUN apt-get -qq update && apt-get install -y \
libicu55 libxml2 libbsd0 libcurl3 libatomic1 wget && rm -r /var/lib/apt/lists/*
RUN /bin/bash -c "$(wget -qO- https://apt.vapor.sh)"
RUN wget -q https://repo.vapor.codes/apt/keyring.gpg -O- | apt-key add -
RUN apt-get update && apt-get install swift vapor -y
WORKDIR /app
COPY --from=builder /build/bin .
COPY --from=builder /build/lib/* /usr/lib/
EXPOSE 3000
ENTRYPOINT ./Run serve -e prod -b 0.0.0.0 -p 3000
I am currently using this to deploy my service on a virtual server, which, due to its low performance, takes forever to build the project.
Is it good practice, and possible, to build the builder image locally and upload it to a private repo on Docker Hub, so I can do the heavy lifting on my local machine?
Could I then run just the second stage on my virtual server? That would mean:
FROM myPrivateImageBuiltLocally as builder
WORKDIR /app
COPY . .
FROM ubuntu:16.04
RUN apt-get -qq update && apt-get install -y \
libicu55 libxml2 libbsd0 libcurl3 libatomic1 wget && rm -r /var/lib/apt/lists/*
RUN /bin/bash -c "$(wget -qO- https://apt.vapor.sh)"
RUN wget -q https://repo.vapor.codes/apt/keyring.gpg -O- | apt-key add -
RUN apt-get update && apt-get install swift vapor -y
WORKDIR /app
COPY --from=builder /build/bin .
COPY --from=builder /build/lib/* /usr/lib/
EXPOSE 3000
ENTRYPOINT ./Run serve -e prod -b 0.0.0.0 -p 3000
Yes, you can do that. You don't even have to build it locally: you can use the automated build feature of Docker Hub. It works like this:
1) Push the code to GitHub/Bitbucket.
2) Create a new image repo on Docker Hub and map it to the GitHub repo.
This will automatically build the image each time you push a new commit to the GitHub repo.
You can also see stats like build logs, success or failure, number of downloads, etc.
ref: https://docs.docker.com/docker-cloud/builds/automated-build/#configure-automated-build-settings
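If you would rather build and push the builder image manually, a minimal sketch (the image names are placeholders; --target builds only the named stage):
docker build --target builder -t myuser/app-builder:latest .
docker push myuser/app-builder:latest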