Docker container exits with error code 127: libcurl not found

I am building a container for a Rust app that is deployed on Argonaut, but it is not able to start. Here is the Dockerfile:
FROM rust:1.64.0-buster AS builder
WORKDIR /app
ARG TOKEN
ARG DATABASE_URL
RUN git config --global url."https://${TOKEN}:@github.com/".insteadOf "https://github.com/"
COPY . .
ENV CARGO_NET_GIT_FETCH_WITH_CLI true
RUN rustup component add rustfmt
RUN apt-get update -y && apt-get install git wget ca-certificates curl gnupg lsb-release cmake libcurl4 -y
RUN cargo build
FROM debian:buster-slim
WORKDIR /app
COPY --from=builder /app/target/debug/linkedin /app/target/release/linkedin
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
CMD ["/app/target/release/linkedin"]
EXPOSE 3000
It builds successfully, but when it runs it exits with error code 127:
linkedin-leadr-1 | /app/target/release/linkedin: error while loading shared libraries: libcurl.so.4: cannot open shared object file: No such file or directory
I have not found what's wrong with it: even though I am installing libcurl4, my container cannot find it. Can you please give me the solution?

You install libcurl4 in your build environment but not in your execution environment; that's most likely the reason.
There are two ways to solve this:
1). Install libcurl4 in your final image, or
2). Link statically by replacing cargo build with
RUN rustup target add x86_64-unknown-linux-musl
RUN cargo build --target=x86_64-unknown-linux-musl --release
The --release flag should be added either way, as I'm sure you don't want to ship unoptimized debug builds to your end users ;)
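If you take the static route, the runtime stage can shrink dramatically, because the binary no longer needs any system libraries. A minimal sketch of such a final stage, assuming the musl build above and the same binary name (note that crates wrapping C libraries, such as curl, may need extra setup to compile against musl):
FROM scratch
# CA certificates are still needed for outgoing TLS connections
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /app/target/x86_64-unknown-linux-musl/release/linkedin /linkedin
EXPOSE 3000
CMD ["/linkedin"]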
Note that if you choose to install libcurl4 in your final image, you need to clean up the apt cache afterwards, otherwise your image grows immensely:
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install --yes \
libcurl4 \
&& apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
The full Dockerfile with libcurl4 installed would then look like this:
FROM rust:1.64.0-buster AS builder
WORKDIR /app
ARG TOKEN
ARG DATABASE_URL
RUN git config --global url."https://${TOKEN}:@github.com/".insteadOf "https://github.com/"
COPY . .
ENV CARGO_NET_GIT_FETCH_WITH_CLI true
RUN rustup component add rustfmt
RUN apt-get update -y && apt-get install git wget ca-certificates curl gnupg lsb-release cmake libcurl4 -y
RUN cargo build --release
FROM debian:buster-slim
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install --yes \
libcurl4 \
&& apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
WORKDIR /app
COPY --from=builder /app/target/release/linkedin /app/target/release/linkedin
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
CMD ["/app/target/release/linkedin"]
EXPOSE 3000
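To verify that the dynamic loader can now resolve every dependency, you can run ldd against the binary in the final image; the command replaces the CMD, and the image tag below is just an example:
docker build -t linkedin:latest .
docker run --rm linkedin:latest ldd /app/target/release/linkedin
Every library in the output should show a resolved path; a "not found" entry means something is still missing.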

Related

Run 32bit app on Ubuntu 20.04 docker container

I built an Ubuntu image using the following Dockerfile:
FROM ubuntu:20.04
# Disable Prompt During Packages Installation
ARG DEBIAN_FRONTEND=noninteractive
# Add 32bit architecture
RUN dpkg --add-architecture i386 \
&& apt-get update \
&& apt-get install -y libc6:i386 libncurses5:i386 libstdc++6:i386 zlib1g:i386
RUN apt-get update && apt-get install -y locales && rm -rf /var/lib/apt/lists/* \
&& localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8
ENV LANG en_US.utf8
RUN apt-get update && apt-get install -y \
iputils-ping \
python3 python3-pip
# Copy app to container
COPY . /app
WORKDIR /app
# Install pip requirements
COPY requirements.txt /app
RUN python3 -m pip install -r requirements.txt
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["bash"]
I've been trying to run a 32-bit app (hence the first RUN command in the Dockerfile) that I have inside the my_app directory, using:
./app
but I keep getting
bash: ./app: No such file or directory
I built your Dockerfile with no errors. Do you have more details?
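For what it's worth, bash reporting "No such file or directory" for a binary that is clearly present usually means the kernel cannot find the binary's ELF interpreter (here, the 32-bit ld-linux loader). Two diagnostics worth running inside the container, using the path from the question:
file ./app    # should report "ELF 32-bit LSB executable, Intel 80386" and name the interpreter
ldd ./app     # any "not found" lines point at missing 32-bit libraries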

version `GLIBC_2.29' not found

I am basing my Dockerfile on the rust base image.
When deploying my image to an azure container, I receive this log:
./bot: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by ./bot)
./bot is my application.
The error also occurs when I perform docker run on my Linux Mint desktop.
How can I get GLIBC into my container?
Dockerfile
FROM rust:1.50
WORKDIR /usr/vectorizer/
COPY ./Cargo.toml /usr/vectorizer/Cargo.toml
COPY ./target/release/trampoline /usr/vectorizer/trampoline
COPY ./target/release/bot /usr/vectorizer/bot
COPY ./target/release/template.svg /usr/vectorizer/template.svg
RUN apt-get update && \
apt-get dist-upgrade -y && \
apt-get install -y musl-tools && \
rustup target add x86_64-unknown-linux-musl
CMD ["./trampoline"]
Now, I don't totally understand the dependencies of your particular project, but the Dockerfile below should get you started.
What you want to do is compile in an image that has all of your dev dependencies and then move the build artifacts to a much smaller (but compatible) image.
FROM rust:1.50 as builder
RUN mkdir bot
WORKDIR /bot
ADD . ./
RUN cargo clean && \
cargo build -vv --release
FROM debian:buster-slim
ARG APP=/usr/src/app
ENV APP_USER=appuser
RUN groupadd $APP_USER \
&& useradd -g $APP_USER $APP_USER \
&& mkdir -p ${APP}
# Copy the compiled binaries into the new container.
COPY --from=builder /bot/target/release/bot ${APP}/bot
RUN chown -R $APP_USER:$APP_USER ${APP}
USER $APP_USER
WORKDIR ${APP}
CMD ["./trampoline"]

Dockerfile is not working properly when using WORKDIR command?

Here's the Dockerfile that I want to use for one of my web APIs built with Python FastAPI, but whenever I try to build it, I get the error given below.
FROM tiangolo/uvicorn-gunicorn:python3.8
RUN apt-get update && \
apt-get upgrade -y && \
apt-get dist-upgrade -y && \
apt-get autoremove -y && \
apt-get clean && \
apt-get autoclean && \
apt-get install -y gcc make apt-transport-https ca-certificates build-essential
RUN apt-get install -y curl autoconf automake libtool pkg-config git
RUN git clone https://github.com/openvenues/libpostal
WORKDIR /libpostal
RUN ./bootstrap.sh
RUN libpostal/configure --datadir=/opt
RUN libpostal/make -j $(nproc)
RUN libpostal/make install && ldconfig
ENV PORT 8000
ENV APP_MODULE app.parser:app
ENV LOG_LEVEL debug
ENV WEB_CONCURRENCY 2
COPY ./requirements/base.txt ./requirements/base.txt
RUN pip install --no-cache-dir -r requirements/base.txt
COPY ./app /app/app
Whenever I run this, I get the error below:
Sending build context to Docker daemon 4.262GB
Step 1/18 : FROM tiangolo/uvicorn-gunicorn:python3.8
---> 524e010ef786
Step 2/18 : ENV ENVIRONMENT staging
---> Using cache
---> d3e496ea9bbe
Step 3/18 : RUN apt-get update && apt-get upgrade -y && apt-get dist-upgrade -y && apt-get autoremove -y && apt-get clean && apt-get autoclean && apt-get install -y gcc make apt-transport-https ca-certificates build-essential
---> Using cache
---> cf3c1a8556e0
Step 4/18 : RUN apt-get install -y curl autoconf automake libtool pkg-config git
---> Using cache
---> 77879c6f66e9
Step 5/18 : RUN git clone https://github.com/openvenues/libpostal
---> Using cache
---> f1f7cf06e398
Step 6/18 : WORKDIR /libpostal
---> Running in 51191c3a69cb
Removing intermediate container 51191c3a69cb
---> d98ff97331db
Step 7/18 : RUN ./bootstrap.sh
---> Running in 40fd37f4900b
/bin/sh: 1: ./bootstrap.sh: not found
The command '/bin/sh -c ./bootstrap.sh' returned a non-zero code: 127
Please tell me what I am doing wrong in the Dockerfile.
The default WORKDIR for your base image tiangolo/uvicorn-gunicorn:python3.8 is /app (set in the base image's Dockerfile). So when you cloned the repo, you were actually cloning it into /app.
You can explicitly set WORKDIR / or specify WORKDIR /app/libpostal to successfully run the bootstrap script.
You should also adjust the paths in the RUN commands after cloning, since they should be relative to the new working directory. Here are the changes I suggest:
Option 1
# this command is run in the /app folder, a default set in the base image
RUN git clone https://github.com/openvenues/libpostal
WORKDIR /app/libpostal
RUN ./bootstrap.sh
RUN ./configure --datadir=/opt
RUN make -j $(nproc)
RUN make install && ldconfig
Option 2
# explicitly set working directory in root
WORKDIR /
RUN git clone https://github.com/openvenues/libpostal
WORKDIR /libpostal
RUN ./bootstrap.sh
RUN ./configure --datadir=/opt
RUN make -j $(nproc)
RUN make install && ldconfig
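Either way, if you are unsure which directory a given build step runs in, a throwaway debug step makes it visible (remove it once the build works):
RUN pwd && ls -la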

Setting up our Rasa/NLU container, error?

I have this file Dockerfile.nlu
FROM chatbot/spacy:latest
WORKDIR /app
COPY nlu ./agent_nlu
RUN python -m rasa_nlu.train --config agent_nlu/config.yml --data agent_nlu/data/ --path agent_nlu/agent --fixed_model_name default
and I get the error below:
$ sudo docker build -t nlu:latest -f docker/Dockerfile.nlu .
Sending build context to Docker daemon 9.216kB
Step 1/4 : FROM chatbot/spacy:latest
---> 496dc6a38abb
Step 2/4 : WORKDIR /app
---> Using cache
---> 7f02012c8452
Step 3/4 : COPY nlu ./agent_nlu
COPY failed: stat /var/lib/docker/tmp/docker-builder363868051/nlu: no such file or directory
It doesn't look like Docker can find the nlu directory. Are you sure it exists? Are you sure that you are executing the command from the correct directory?
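Keep in mind that the source path in a COPY instruction is resolved against the build context (the final . in your docker build command), not against the Dockerfile's location. A quick sanity check from the directory where you invoke the build:
ls ./nlu    # COPY nlu ./agent_nlu looks for this folder in the build context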
But you also aren't installing Rasa at all, or any of its requirements. Is there a reason you aren't using the pre-built Rasa images? They are available on Docker Hub, and the documentation covers how to use them.
Here is a fully functional Dockerfile pulled from their repo.
FROM python:3.6-slim
ENV RASA_NLU_DOCKER="YES" \
RASA_NLU_HOME=/app \
RASA_NLU_PYTHON_PACKAGES=/usr/local/lib/python3.6/dist-packages
# Run updates, install basics and cleanup
# - build-essential: Compile specific dependencies
# - git-core: Checkout git repos
RUN apt-get update -qq \
&& apt-get install -y --no-install-recommends build-essential git-core openssl libssl-dev libffi6 libffi-dev curl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
WORKDIR ${RASA_NLU_HOME}
COPY . ${RASA_NLU_HOME}
# use bash always
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN pip install -r alt_requirements/requirements_spacy_sklearn.txt
RUN pip install -e .
RUN pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_md-2.0.0/en_core_web_md-2.0.0.tar.gz --no-cache-dir > /dev/null \
&& python -m spacy link en_core_web_md en \
&& pip install https://github.com/explosion/spacy-models/releases/download/de_core_news_sm-2.0.0/de_core_news_sm-2.0.0.tar.gz --no-cache-dir > /dev/null \
&& python -m spacy link de_core_news_sm de
COPY sample_configs/config_spacy.yml ${RASA_NLU_HOME}/config.yml
VOLUME ["/app/projects", "/app/logs", "/app/data"]
EXPOSE 5000
ENTRYPOINT ["./entrypoint.sh"]
CMD ["start", "-c", "config.yml", "--path", "/app/projects"]

Private docker container to release

I am using a Dockerfile multistage configuration similar to the one below.
FROM swift:4.1 as builder
WORKDIR /app
COPY . .
RUN swift build --configuration release && mv `swift build -c release --show-bin-path` /build/bin
FROM ubuntu:16.04
RUN apt-get -qq update && apt-get install -y \
libicu55 libxml2 libbsd0 libcurl3 libatomic1 wget && rm -r /var/lib/apt/lists/*
RUN /bin/bash -c "$(wget -qO- https://apt.vapor.sh)"
RUN wget -q https://repo.vapor.codes/apt/keyring.gpg -O- | apt-key add -
RUN apt-get update && apt-get install swift vapor -y
WORKDIR /app
COPY --from=builder /build/bin .
COPY --from=builder /build/lib/* /usr/lib/
EXPOSE 3000
ENTRYPOINT ./Run serve -e prod -b 0.0.0.0 -p 3000
I am currently using this to deploy my service on a virtual server, which, due to its low performance, takes forever to build the project.
Is it good practice, and possible, to build the builder-stage image and upload it to a private repo on Docker Hub, so that I can do the heavy build from my local machine?
Could I then run just the second stage on my virtual server? That is:
FROM myPrivateImageBuiltLocally as builder
WORKDIR /app
COPY . .
FROM ubuntu:16.04
RUN apt-get -qq update && apt-get install -y \
libicu55 libxml2 libbsd0 libcurl3 libatomic1 wget && rm -r /var/lib/apt/lists/*
RUN /bin/bash -c "$(wget -qO- https://apt.vapor.sh)"
RUN wget -q https://repo.vapor.codes/apt/keyring.gpg -O- | apt-key add -
RUN apt-get update && apt-get install swift vapor -y
WORKDIR /app
COPY --from=builder /build/bin .
COPY --from=builder /build/lib/* /usr/lib/
EXPOSE 3000
ENTRYPOINT ./Run serve -e prod -b 0.0.0.0 -p 3000
Yes, you can do it. You don't even have to build it locally: you can use the automated build feature of Docker Hub. It works like this:
1). Push the code to GitHub/Bitbucket.
2). Create a new image on Docker Hub and map it to the GitHub repo.
This will automatically build the image each time you push a new commit to the GitHub repo.
You can also see stats such as build logs, success or failure, and the number of downloads.
ref: https://docs.docker.com/docker-cloud/builds/automated-build/#configure-automated-build-settings
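If you would rather build on your local machine than rely on automated builds, the manual flow is a plain build-tag-push of the builder stage; the image name below is a placeholder, and --target requires the stage to be named, as in FROM swift:4.1 as builder above:
docker build --target builder -t myuser/vapor-builder:latest .
docker push myuser/vapor-builder:latest
On the server, the first line of your second Dockerfile would then reference that pushed image, exactly as in your FROM myPrivateImageBuiltLocally as builder example.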
