Here's my Dockerfile that I want to use for one of my web APIs built with Python FastAPI, but whenever I try to build it, I get the error below.
FROM tiangolo/uvicorn-gunicorn:python3.8
RUN apt-get update && \
apt-get upgrade -y && \
apt-get dist-upgrade -y && \
apt-get autoremove -y && \
apt-get clean && \
apt-get autoclean && \
apt-get install -y gcc make apt-transport-https ca-certificates build-essential
RUN apt-get install -y curl autoconf automake libtool pkg-config git
RUN git clone https://github.com/openvenues/libpostal
WORKDIR /libpostal
RUN ./bootstrap.sh
RUN libpostal/configure --datadir=/opt
RUN libpostal/make -j $(nproc)
RUN libpostal/make install && ldconfig
ENV PORT 8000
ENV APP_MODULE app.parser:app
ENV LOG_LEVEL debug
ENV WEB_CONCURRENCY 2
COPY ./requirements/base.txt ./requirements/base.txt
RUN pip install --no-cache-dir -r requirements/base.txt
COPY ./app /app/app
Whenever I run the build, I get this error:
Sending build context to Docker daemon 4.262GB
Step 1/18 : FROM tiangolo/uvicorn-gunicorn:python3.8
---> 524e010ef786
Step 2/18 : ENV ENVIRONMENT staging
---> Using cache
---> d3e496ea9bbe
Step 3/18 : RUN apt-get update && apt-get upgrade -y && apt-get dist-upgrade -y && apt-get autoremove -y && apt-get clean && apt-get autoclean && apt-get install -y gcc make apt-transport-https ca-certificates build-essential
---> Using cache
---> cf3c1a8556e0
Step 4/18 : RUN apt-get install -y curl autoconf automake libtool pkg-config git
---> Using cache
---> 77879c6f66e9
Step 5/18 : RUN git clone https://github.com/openvenues/libpostal
---> Using cache
---> f1f7cf06e398
Step 6/18 : WORKDIR /libpostal
---> Running in 51191c3a69cb
Removing intermediate container 51191c3a69cb
---> d98ff97331db
Step 7/18 : RUN ./bootstrap.sh
---> Running in 40fd37f4900b
/bin/sh: 1: ./bootstrap.sh: not found
The command '/bin/sh -c ./bootstrap.sh' returned a non-zero code: 127
What am I doing wrong in the Dockerfile?
The default WORKDIR for your base image tiangolo/uvicorn-gunicorn:python3.8 is /app (it is set in the base image's Dockerfile). So when you cloned the repo, the clone actually ran in /app and the source ended up at /app/libpostal, not /libpostal.
You can either explicitly set WORKDIR / before the clone, or point WORKDIR at /app/libpostal, to successfully run the bootstrap script.
You should also adjust the paths in the RUN commands after cloning: they should be relative, since you are already inside the libpostal directory. Here are the changes I suggest:
Option 1
# this command is run in the /app folder, a default set in the base image
RUN git clone https://github.com/openvenues/libpostal
WORKDIR /app/libpostal
RUN ./bootstrap.sh
RUN ./configure --datadir=/opt
RUN make -j $(nproc)
RUN make install && ldconfig
Option 2
# explicitly set working directory in root
WORKDIR /
RUN git clone https://github.com/openvenues/libpostal
WORKDIR /libpostal
RUN ./bootstrap.sh
RUN ./configure --datadir=/opt
RUN make -j $(nproc)
RUN make install && ldconfig
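For completeness, here is a sketch of the full Dockerfile with Option 1 applied (untested; the apt upgrade/cleanup steps from your original are omitted for brevity, and WORKDIR is reset to /app afterwards so the relative COPY and pip paths still resolve):
FROM tiangolo/uvicorn-gunicorn:python3.8
RUN apt-get update && \
    apt-get install -y gcc make apt-transport-https ca-certificates build-essential \
    curl autoconf automake libtool pkg-config git
# the clone runs in /app, the default WORKDIR of the base image
RUN git clone https://github.com/openvenues/libpostal
WORKDIR /app/libpostal
RUN ./bootstrap.sh
RUN ./configure --datadir=/opt
RUN make -j $(nproc)
RUN make install && ldconfig
# back to /app so the application paths below stay relative to it
WORKDIR /app
ENV PORT 8000
ENV APP_MODULE app.parser:app
ENV LOG_LEVEL debug
ENV WEB_CONCURRENCY 2
COPY ./requirements/base.txt ./requirements/base.txt
RUN pip install --no-cache-dir -r requirements/base.txt
COPY ./app /app/app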
I am building a container for a Rust app that I deploy on Argonaut, but it is not able to start. Here is the Dockerfile:
FROM rust:1.64.0-buster AS builder
WORKDIR /app
ARG TOKEN
ARG DATABASE_URL
RUN git config --global url."https://${TOKEN}:#github.com/".insteadOf "https://github.com/"
COPY . .
ENV CARGO_NET_GIT_FETCH_WITH_CLI true
RUN rustup component add rustfmt
RUN apt-get update -y && apt-get install git wget ca-certificates curl gnupg lsb-release cmake libcurl4 -y
RUN cargo build
FROM debian:buster-slim
WORKDIR /app
COPY --from=builder /app/target/debug/linkedin /app/target/release/linkedin
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
CMD ["/app/target/release/linkedin"]
EXPOSE 3000
It builds successfully, but when the container runs it exits with error code 127.
linkedin-leadr-1 | /app/target/release/linkedin: error while loading shared libraries: libcurl.so.4: cannot open shared object file: No such file or directory
I haven't found what's wrong with it even though I am installing libcurl4; the container just can't find the library. Can you please give me the solution?
You install libcurl4 in your build environment but not in your execution environment; that's most likely the reason.
There are two ways to solve this:
Install libcurl4 in your final image, or
Link statically by replacing cargo build with
RUN rustup target add x86_64-unknown-linux-musl
RUN cargo build --target=x86_64-unknown-linux-musl --release
The --release flag should be added either way, as I'm sure you don't want to deliver unoptimized debug builds to your end users ;)
Note that if you choose to install libcurl4 in your final image, you need to clean up the apt cache afterwards, otherwise your image grows immensely:
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install --yes \
libcurl4 \
&& apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
The full Dockerfile with libcurl4 installed would then look like this:
FROM rust:1.64.0-buster AS builder
WORKDIR /app
ARG TOKEN
ARG DATABASE_URL
RUN git config --global url."https://${TOKEN}:#github.com/".insteadOf "https://github.com/"
COPY . .
ENV CARGO_NET_GIT_FETCH_WITH_CLI true
RUN rustup component add rustfmt
RUN apt-get update -y && apt-get install git wget ca-certificates curl gnupg lsb-release cmake libcurl4 -y
RUN cargo build
FROM debian:buster-slim
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install --yes \
libcurl4 \
&& apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
WORKDIR /app
COPY --from=builder /app/target/debug/linkedin /app/target/release/linkedin
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
CMD ["/app/target/release/linkedin"]
EXPOSE 3000
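If you go with the static musl build instead, the final stage doesn't need libcurl4 at all. A rough sketch (assuming all of your dependencies actually build for the musl target, which is worth verifying; the binary path follows cargo's target/<triple>/release layout):
FROM rust:1.64.0-buster AS builder
WORKDIR /app
ARG TOKEN
ARG DATABASE_URL
RUN git config --global url."https://${TOKEN}:#github.com/".insteadOf "https://github.com/"
COPY . .
ENV CARGO_NET_GIT_FETCH_WITH_CLI true
RUN rustup component add rustfmt
RUN apt-get update -y && apt-get install git wget ca-certificates curl gnupg lsb-release cmake libcurl4 -y
RUN rustup target add x86_64-unknown-linux-musl
RUN cargo build --target=x86_64-unknown-linux-musl --release

FROM debian:buster-slim
WORKDIR /app
# a statically linked binary has no shared-library dependencies to copy
COPY --from=builder /app/target/x86_64-unknown-linux-musl/release/linkedin /app/linkedin
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
CMD ["/app/linkedin"]
EXPOSE 3000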
dockerfile:
FROM python:3.9
ENV PYTHONUNBUFFERED 1
RUN mkdir /app
WORKDIR /app
RUN apt-get update && apt-get upgrade -y && \
curl -fsSL https://deb.nodesource.com/setup_17.x | bash - && \
apt-get install -y nodejs && \
This will install nodejs every time I build. Can I install nodejs only if it is not installed?
Unless you change one of the layers above this one (or this one itself), or clear the Docker build cache, the installation will not be re-executed; Docker reuses the cached layer.
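To get the most out of the cache, keep the nodejs layer above anything that changes often, so edits to your application code don't invalidate it. A sketch, assuming the rest of your Dockerfile copies application code and installs Python requirements (those lines are placeholders):
FROM python:3.9
ENV PYTHONUNBUFFERED 1
WORKDIR /app
# expensive, rarely-changing layer: reused from cache on every rebuild
RUN apt-get update && apt-get upgrade -y && \
    curl -fsSL https://deb.nodesource.com/setup_17.x | bash - && \
    apt-get install -y nodejs && \
    rm -rf /var/lib/apt/lists/*
# frequently-changing files go below the expensive layers (placeholder paths)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .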
Following is my Dockerfile :-
FROM ubuntu:18.04 AS builder
RUN apt update -y
RUN apt install python3.8 -y && apt install python3-pip -y
RUN apt install build-essential automake pkg-config libtool libffi-dev libgmp-dev -y
RUN apt install libsecp256k1-dev -y
RUN apt install openjdk-8-jre -y
RUN apt install git -y
RUN apt install libkrb5-dev -y
RUN apt install vim -y
RUN mkdir /opt/app
RUN chown -R root:root /opt/app
COPY ["requirements.txt","/opt/app/requirements.txt"]
SHELL ["/bin/bash", "-c"]
WORKDIR /opt/app
RUN pip3 install -r requirements.txt && apt-get -y clean all
RUN mkdir /opt/app/
RUN chown -R root:root /opt/app/
RUN cd /opt/app/
RUN git clone -b master https://bitbucket.org/heroes/test.git
CMD ["bash","/opt/app/bin/connect.sh"]
The image builds with a size of 1.7 GB. I need OpenJDK, so I cannot use a standard Python image as the base. When I run docker history, I can see 2 or 3 layers (installing packages such as Python 3.8, OpenJDK and libsecp256k1-dev) each taking 400 MB to 500 MB. The ubuntu base image accounts for only about 64 MB; the rest of the size comes from my Dockerfile's layers.
I believed I needed to rewrite the Dockerfile to reduce the size, which I did, but nothing concrete changed.
Please help me get the image below at least 1 GB.
[Update]
Below is my updated Dockerfile:-
FROM ubuntu:18.04 AS builder
WORKDIR /opt/app
COPY requirements.txt /opt/app/aws/requirements.txt
RUN mkdir -p /opt/app/aws \
&& apt-get update -yq \
&& apt-get install -y python3.8 python3-pip openjdk-8-jre -yq && apt-get -y clean all \
&& chown -R root:root /opt/app && cd /opt/app/aws && pip3 install -r requirements.txt
FROM alpine
COPY --from=builder /opt/app /opt/app
SHELL ["/bin/bash", "-c"]
CMD ["bash","/opt/app/aws/bin/connector/connect.sh"]
After removing unwanted libraries like git and using a multi-stage build, the image is still approximately 1.7 GB, which I believe is a lot. Any suggestions to improve this?
You have multiple issues going on.
First, every RUN apt install adds its own layer and grows the image. Combine them into a single RUN instruction and, at the end of that instruction, delete the cached apt files.
Second, you're installing unnecessary stuff. Why would you need vim and git, for instance? Why are you installing build-essential and other build-related packages if you're not building anything?
Third, it seems you tried to do a multi-stage build but ended up adding everything to the same image. Read up on Python multi-stage builds.
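A sketch of what a consolidated version could look like (package names, the repository and the CMD are taken from your Dockerfile; everything else is an assumption to adapt, and if pip needs compilers for some requirements, install and purge them inside the same RUN):
FROM ubuntu:18.04
WORKDIR /opt/app
COPY requirements.txt .
# single layer: install, set up Python deps, fetch the code, then clean up
RUN apt-get update -yq \
    && apt-get install -yq python3.8 python3-pip openjdk-8-jre-headless \
       libsecp256k1-dev libkrb5-dev git \
    && pip3 install --no-cache-dir -r requirements.txt \
    && git clone -b master https://bitbucket.org/heroes/test.git \
    && apt-get purge -y --auto-remove git \
    && apt-get clean && rm -rf /var/lib/apt/lists/*
CMD ["bash","/opt/app/bin/connect.sh"]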
Best practice is to use a single RUN instruction instead of multiple RUNs.
For example
RUN apt-get update -yq \
    && apt-get install -yq python3-dev build-essential \
    && apt-get install -yq curl \
    && pip install -r requirements.txt \
    && apt-get purge -y --auto-remove gcc python3-dev build-essential
You can also use a multi-stage build: if you don't require git in your final image, leave it out of the final stage.
If possible, use an Alpine-based image as well.
Try disabling APT's recommended packages with --no-install-recommends; you can read more about it here.
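For example, applied to the Java install alone (the same flag works for any of the apt-get install lines; a sketch):
RUN apt-get update -yq \
    && apt-get install -yq --no-install-recommends openjdk-8-jre-headless \
    && apt-get clean && rm -rf /var/lib/apt/lists/*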
Using openjdk-8-jre-headless instead of openjdk-8-jre (and dropping the separate python3.8 install, since python3-pip already pulls in Python 3) makes the image smaller:
FROM ubuntu:18.04 AS builder
RUN apt update -y
RUN apt install python3-pip -y
RUN apt install build-essential automake pkg-config libtool libffi-dev libgmp-dev -y
RUN apt install libsecp256k1-dev -y
RUN apt install openjdk-8-jre-headless -y
RUN apt install git -y
RUN apt install libkrb5-dev -y
RUN apt install vim -y
RUN mkdir /opt/app
RUN chown -R root:root /opt/app
COPY ["requirements.txt","/opt/app/requirements.txt"]
SHELL ["/bin/bash", "-c"]
WORKDIR /opt/app
RUN pip3 install -r requirements.txt && apt-get -y clean all
RUN mkdir /opt/app/
RUN chown -R root:root /opt/app/
RUN cd /opt/app/
RUN git clone -b master https://bitbucket.org/heroes/test.git
CMD ["bash","/opt/app/bin/connect.sh"]
For example, I may have the following Dockerfile. When I run docker build, each RUN gets a separate hash (e.g., 1d9c17228a9e), and it runs very fast if it has already been run before. I guess each hash is associated with an actual file on the backend. Is that so?
If there are separate files, how can they all be loaded into a single virtual machine quickly? Is there some kind of assembly step when starting a new virtual machine (Docker container)? Thanks.
$ docker build -t ubtsrv .
Sending build context to Docker daemon 12.29kB
Step 1/22 : FROM ubuntu
---> 1d9c17228a9e
Step 2/22 : RUN rm -rf /etc/dpkg/dpkg.cfg.d/excludes
---> Using cache
---> eb02f606ba08
Step 3/22 : RUN apt-get -y update && dpkg -l | grep ^ii | cut -d' ' -f3 | xargs apt-get install -y --reinstall
---> Using cache
---> 7062816b0023
Step 4/22 : RUN apt-get -y install apt-utils
---> Using cache
---> b89d4cdb791c
Step 5/22 : RUN apt -y update && apt -y upgrade
---> Using cache
---> 8100af2b7f2e
Step 6/22 : RUN apt-get -y install vim
---> Using cache
---> 57c142f99935
Step 7/22 : RUN apt-get -y install man
---> Using cache
---> ddb73e4bbddc
Step 8/22 : RUN apt-get -y install gawk
---> Using cache
---> 7422b4371c16
Step 9/22 : RUN apt-get -y install mawk
---> Using cache
---> 53a01709a342
Step 10/22 : RUN apt-get -y install build-essential
---> Using cache
---> af94947e6922
Step 11/22 : RUN apt-get -y install command-not-found
---> Using cache
---> 20094698a583
Step 12/22 : RUN apt-get -y install clang
---> Using cache
---> e63570058a57
Step 13/22 : RUN apt-get -y install htop
---> Using cache
---> b09fec30dc23
Step 14/22 : RUN apt-get -y install wget
---> Using cache
---> d2794d29f9ee
Step 15/22 : RUN apt-get -y install curl
---> Using cache
---> 2b122c49f3ca
Step 16/22 : RUN wget -q ftp://ftp.gnu.org/gnu/bash/bash-4.4.18.tar.gz && tar xzvf bash-4.4.18.tar.gz && cd bash-4.4.18 && ./configure && make -j && make install && cd .. && rm -rf bash-4.4.18.tar.gz bash-4.4.18
---> Using cache
---> c4bf046aff2a
Step 17/22 : RUN apt-get install -y git
---> Using cache
---> 40ebefa7acda
Step 18/22 : RUN apt-get install -y ack
---> Using cache
---> 05cefb3f0496
Step 19/22 : RUN apt-get install -y info
---> Using cache
---> 3361e4e4e06f
Step 20/22 : RUN apt-get install -y llvm
---> Using cache
---> 50b7c75fc2f5
Step 21/22 : RUN apt-get install -y graphviz
---> Using cache
---> 80f89477930c
Step 22/22 : RUN apt-get install -y cmake
---> Using cache
---> c8320b1b2523
Successfully built c8320b1b2523
Successfully tagged ubtsrv:latest
$ cat Dockerfile
FROM ubuntu
RUN rm -rf /etc/dpkg/dpkg.cfg.d/excludes
RUN apt-get -y update && \
dpkg -l | grep ^ii | cut -d' ' -f3 | xargs apt-get install -y --reinstall
RUN apt-get -y install apt-utils
RUN apt -y update && apt -y upgrade
RUN apt-get -y install vim
RUN apt-get -y install man
RUN apt-get -y install gawk
RUN apt-get -y install mawk
RUN apt-get -y install build-essential
RUN apt-get -y install command-not-found
RUN apt-get -y install clang
RUN apt-get -y install htop
RUN apt-get -y install wget
RUN apt-get -y install curl
RUN wget -q ftp://ftp.gnu.org/gnu/bash/bash-4.4.18.tar.gz && \
tar xzvf bash-4.4.18.tar.gz && \
cd bash-4.4.18 && \
./configure && \
make -j && \
make install && \
cd .. && \
rm -rf bash-4.4.18.tar.gz bash-4.4.18
RUN apt-get install -y git
RUN apt-get install -y ack
RUN apt-get install -y info
RUN apt-get install -y llvm
RUN apt-get install -y graphviz
RUN apt-get install -y cmake
Each hash is a docker layer. It's just a filesystem layer containing the different files added in that step. If you dip into docker internals you can actually take a look at the specific files that were added.
This section on docker caching describes how docker decides what is cached and what is not: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#leverage-build-cache
This tool is a lot of fun: https://github.com/wagoodman/dive. It's an easy way to explore your Docker images and check out the contents of each layer.
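For example, once dive is installed you can point it at the image built above and browse each layer's files interactively:
$ dive ubtsrv:latest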
Let's talk through an example dockerfile:
FROM alpine
WORKDIR /opt/
RUN touch foo && mkdir bar && touch bar/foo
RUN rm foo && touch file.txt
RUN rm -rf bar
Here's the build output:
Building app
Step 1/5 : FROM alpine
---> 196d12cf6ab1
Step 2/5 : WORKDIR /opt/
---> Running in 2098e27c28b9
Removing intermediate container 2098e27c28b9
---> 74634b6a7dcd
Step 3/5 : RUN touch foo && mkdir bar && touch bar/foo
---> Running in f109a620ebfd
Removing intermediate container f109a620ebfd
---> dea70d465cc1
Step 4/5 : RUN rm foo && touch file.txt
---> Running in 367e61e301ba
Removing intermediate container 367e61e301ba
---> 9dcca4810268
Step 5/5 : RUN rm -rf bar
---> Running in d176de336110
Removing intermediate container d176de336110
---> 2e2eee6b9bf8
Successfully built 2e2eee6b9bf8
Successfully tagged docker-fsl_app:latest
If I run docker inspect 2e2eee6b9bf8 (the hash output above), Docker returns a bunch of data. Included in that are these two sections:
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/de87e6b38f95b44137409b5a61b498781473bc05cfd74a01dd641245219c2a1f/diff:/var/lib/docker/overlay2/02d58096fd47908c82edbc34dd0205541e525afe804e88f517ff47ccf3beeee0/diff:/var/lib/docker/overlay2/91fb3592a0da4847071a51e7dda4f48b810a5d1ff0b22e34bb38a0ee52d13d09/diff:/var/lib/docker/overlay2/2e966b19c5984548a6adb172d092dd21b2bb73f6be839baa680dc524d5221063/diff",
"MergedDir": "/var/lib/docker/overlay2/3216972ae99360398a74720226b26b61f0c04142ad6aaa519c1a9dd36f7fb945/merged",
"UpperDir": "/var/lib/docker/overlay2/3216972ae99360398a74720226b26b61f0c04142ad6aaa519c1a9dd36f7fb945/diff",
"WorkDir": "/var/lib/docker/overlay2/3216972ae99360398a74720226b26b61f0c04142ad6aaa519c1a9dd36f7fb945/work"
},
"Name": "overlay2"
},
"RootFS": {
"Type": "layers",
"Layers": [
"sha256:df64d3292fd6194b7865d7326af5255db6d81e9df29f48adde61a918fbd8c332",
"sha256:b9f91d14f5d797f43eeb5b56264cc697641d50dd5e9d17bf89f33cf0694f6559",
"sha256:97195b4b7c22c7eb8720edeb93feeb6901a34018ce1f3c90dc17f861438abf21",
"sha256:0f3d56ac5865b537686b1e324dfbf54edde5afd06e644903ad6b9af42eab01df",
"sha256:5ff5ef92db130446e0af4836ffba8fbf29d06643aa05a104cb4c7a4c9e462fc7"
]
},
I'm on macOS, where I can run screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty to access the Docker VM. Within the VM, I can actually look at the filesystem layers.
These are the layers in reverse order: /var/lib/docker/overlay2/de87e6b38f95b44137409b5a61b498781473bc05cfd74a01dd641245219c2a1f/diff:/var/lib/docker/overlay2/02d58096fd47908c82edbc34dd0205541e525afe804e88f517ff47ccf3beeee0/diff:/var/lib/docker/overlay2/91fb3592a0da4847071a51e7dda4f48b810a5d1ff0b22e34bb38a0ee52d13d09/diff:/var/lib/docker/overlay2/2e966b19c5984548a6adb172d092dd21b2bb73f6be839baa680dc524d5221063/diff
If you go to those locations you'll see just the files added or removed in that layer. So if I go to /var/lib/docker/overlay2/de87e6b38f95b44137409b5a61b498781473bc05cfd74a01dd641245219c2a1f/diff/opt within the VM and run ls -lah, this is the output:
drwxr-xr-x 2 root root 4.0K Jan 14 16:15 .
drwxr-xr-x 3 root root 4.0K Jan 14 16:15 ..
-rw-r--r-- 1 root root 0 Jan 14 16:15 file.txt
c--------- 1 root root 0, 0 Jan 14 16:15 foo
file.txt has been added and foo has been deleted. The zero-size character device entry for foo is how overlay2 records a deleted file (a "whiteout"), which is why it doesn't show normal permissions.
So for every build step, the diff of files added or deleted is stored as a layer. When a container starts, these layer directories are union-mounted by the overlay2 driver into a single view (the MergedDir in the inspect output above), with a fresh writable layer on top; nothing is copied, which is why starting a container is fast.
I want to install OpenSSL version 1.0.2g in a Docker image, so I wrote this Dockerfile:
RUN apt-get update
RUN apt-get install -y build-essential cmake zlib1g-dev libcppunit-dev git subversion && rm -rf /var/lib/apt/lists/*
RUN wget https://www.openssl.org/source/openssl-1.0.2g.tar.gz -O - | tar -xz
WORKDIR /openssl_1.0.2g
RUN ./config --prefix=/usr/local/openssl --openssldir=/usr/local/openssl
and tried to build it:
Removing intermediate container 0666b2c5021f
---> e92f7ed1e3a0
Step 11/14 : WORKDIR /openssl_1.0.2g
Removing intermediate container c8e083d9a453
---> 112f18273e8f
Step 12/14 : RUN ./config --prefix=/usr/local/openssl --openssldir=/usr/local/openssl
---> Running in 4871c00e5c35
/bin/sh: 1: ./config: not found
The command '/bin/sh -c ./config --prefix=/usr/local/openssl --openssldir=/usr/local/openssl' returned a non-zero code: 127
but it doesn't work...
How can I fix it?
What base image do you use to build the image?
It works fine with the ubuntu:16.04 base image and your Dockerfile with two small changes: wget is added to the apt-get install line, and WORKDIR points to /openssl-1.0.2g (the tarball extracts to a directory named with a hyphen, not an underscore, which is why ./config was not found):
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y build-essential cmake zlib1g-dev libcppunit-dev git subversion wget && rm -rf /var/lib/apt/lists/*
RUN wget https://www.openssl.org/source/openssl-1.0.2g.tar.gz -O - | tar -xz
WORKDIR /openssl-1.0.2g
RUN ./config --prefix=/usr/local/openssl --openssldir=/usr/local/openssl && make && make install
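If you want to sanity-check the build in a later step, the binary ends up under the prefix you configured (a small optional addition):
# optional: verify the freshly built OpenSSL under the configured prefix
RUN /usr/local/openssl/bin/openssl version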