Changes to my Dockerfile are not reflected when running `docker build`

Docker beginner here. I'm building a Docker image by invoking docker build -t my_image . and fixing the Dockerfile wherever a line fails. I'm currently stuck on this line:
RUN apt-get install -qy locales
which I corrected after it previously read:
RUN apt-get install -q locales
(I had forgotten the -y flag, which automatically answers 'yes' to prompts.)
But when I run the build command again, the change to -qy is seemingly not reflected:
---> Running in bc68a3eec929
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed:
libc-l10n
The following NEW packages will be installed:
libc-l10n locales
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 4907 kB of archives.
After this operation, 20.8 MB of additional disk space will be used.
Do you want to continue? [Y/n] Abort.
The command '/bin/sh -c apt-get install -q locales' returned a non-zero code: 1
What I've tried:
Removing any recently stopped containers
Removing any recently built images (all of which had failed)
Using docker build --no-cache -t my_image .
Note that I do not use VOLUME in the Dockerfile. I saw that some users had problems with it, but that's not my issue.
In short, why are changes to my dockerfile not recognized during the build command?

Based on what you've described, I think the first failure is not due to -qy; it's more likely due to not updating the package lists before running apt-get install.
Can you try this and see?
RUN apt-get update && apt-get install -qy locales
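Merging the two commands into a single RUN also keeps apt-get update and apt-get install in the same cache layer, so a stale package list from an earlier cached layer can't resurface. A common form of this pattern is the following sketch (the cleanup line is optional and just keeps the image smaller):
# update, install, and clean up in one layer
RUN apt-get update \
    && apt-get install -qy locales \
    && rm -rf /var/lib/apt/lists/*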

Related

Docker build breaks even though nothing in Dockerfile or docker-compose.yml has changed! :( [duplicate]

This question already has answers here:
In Docker, apt-get install fails with "Failed to fetch http://archive.ubuntu.com/ ... 404 Not Found" errors. Why? How can we get past it?
I was really happy when I first discovered Docker. But I keep running into this issue: an image builds successfully at first, but when I try to rebuild it a few months later, the build fails, even though I made no changes to the Dockerfile or docker-compose.yml.
I suspect that external dependencies (in this case, some mariadb package necessary for cron) may become inaccessible and therefore break the Docker build. Has anyone else encountered this problem? Is this a common problem with Docker? How do I get around it?
Here's the error message.
E: Failed to fetch http://deb.debian.org/debian/pool/main/m/mariadb-10.3/mariadb-common_10.3.29-0+deb10u1_all.deb 404 Not Found [IP: 151.101.54.132 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
Fetched 17.6 MB in 0s (48.6 MB/s)
ERROR: Service 'python_etls' failed to build: The command '/bin/sh -c apt-get install cron -y' returned a non-zero code: 100
This is my docker-compose.yml.
## this docker-compose file is for development/local environment
version: '3'
services:
  python_etls:
    container_name: python_etls
    restart: always
    build:
      context: ./python_etls
      dockerfile: Dockerfile
This is my Dockerfile.
#FROM python:3.8-slim-buster # debian gnutls28 fetch doesn't work in EC2 machine
FROM python:3.9-slim-buster
RUN apt-get update
## install
RUN apt-get install nano -y
## define working directory
ENV CONTAINER_HOME=/home/projects/python_etls
## create working directory
RUN mkdir -p $CONTAINER_HOME
## create directory to save logs
RUN mkdir -p $CONTAINER_HOME/logs
## set working directory
WORKDIR $CONTAINER_HOME
## copy source code into the container
COPY . $CONTAINER_HOME
## install python modules through pip
RUN pip install snowflake snowflake-sqlalchemy sqlalchemy pymongo dnspython numpy pandas python-dotenv xmltodict appstoreconnect boto3
# pip here is /usr/local/bin/pip, as seen when running 'which pip' in command line, and installs python packages for /usr/local/bin/python
# https://stackoverflow.com/questions/45513879/trouble-running-python-script-cron-import-error-no-module-named-tweepy
## changing timezone
ENV TZ=America/Los_Angeles
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
## scheduling with cron
## reference: https://stackoverflow.com/questions/37458287/how-to-run-a-cron-job-inside-a-docker-container
# install cron
RUN apt-get install cron -y
# Copy crontable file to the cron.d directory
COPY crontable /etc/cron.d/crontable
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/crontable
# Apply cron job
RUN crontab /etc/cron.d/crontable
# Create the log file to be able to run tail
#RUN touch /var/log/cron.log
# Run the command on container startup
#CMD cron && tail -f /var/log/cron.log
CMD ["cron", "-f"]
EDIT: Changing the Docker base image from python:3.9-slim-buster to python:3.10-slim-buster in the Dockerfile allowed a successful build. However, I'm still stumped as to why it broke in the first place and what I should do to mitigate this problem in the future.
You may have the misconception that docker will generate the same image every time you try to build from the same Dockerfile. It doesn't, and it's not docker's problem space.
When you build an image, you most likely reach out to external resources: packages, repositories, registries, etc., all things that could be reachable today but missing tomorrow. A security update may be released, and some old packages deleted as a consequence. Package managers may fetch the newest version of a specific library, so the release of an update will affect the reproducibility of your image.
You can try to minimise this effect by pinning the version of as many dependencies as you can. There are also tools, like nix, that allow you to actually reach full reproducibility. If that's important to you, I'd point you in that direction.
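For example, pinning an apt package to an exact version in a Dockerfile looks roughly like this. The version string below is illustrative, not necessarily one your base image carries; apt-cache madison <package> lists the versions its repositories actually offer:
# the version pin here is an illustrative placeholder, not a known-good value
RUN apt-get update && apt-get install -y cron=3.0pl1-134+deb10u1
Note that pinning trades one failure mode for another: the build becomes more predictable, but it will fail outright once the pinned version disappears from the repositories, as happened with the mariadb package above.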

Docker build dependent on host Ubuntu version, not on the actual Dockerfile

I'm facing an issue with my docker build.
I have a Dockerfile as follows:
FROM python:3.6
RUN apt-get update && apt-get install -y libav-tools
....
The issue I'm facing is that I'm getting this error when building on an Ubuntu 20.04 LTS host:
E: Package 'libav-tools' has no installation candidate
I did some research and found that ffmpeg is the replacement for libav-tools:
FROM python:3.6
RUN apt-get update && apt-get install -y ffmpeg
....
I tried again without any issue.
But when I tried to build the same image with ffmpeg on an Ubuntu 16.04 (xenial) host, I got the message:
E: Package 'ffmpeg' has no installation candidate
After that, I replaced ffmpeg with libav-tools and it worked on Ubuntu 16.04.
I'm confused now as to why docker build is dependent on the host Ubuntu version I'm using and not on the actual Dockerfile.
Shouldn't docker build behave consistently whatever Ubuntu version I'm using?
Delete the existing image and pull it again. It seems you have an old image which may have a different base OS, and that is why you are seeing the issue.
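Something along these lines should do it (the target tag my_image is illustrative):
# remove the locally cached base image and fetch a fresh copy
docker rmi python:3.6
docker pull python:3.6
# --pull refreshes the base image on every build, --no-cache discards cached layers
docker build --pull --no-cache -t my_image .
Passing --pull to docker build also spares you from removing the stale base image by hand in the future.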

Docker RUN fails but executing the same commands in the image succeeds

I'm working on a jetson tk1 deployment scheme where I use docker to create the root filesystem, which then gets flashed onto the device.
The way this works is I create an armhf image using the nvidia provided sample filesystem with a qemu-arm-static binary which I can then build upon using standard docker tools.
I then have a "flasher" image which copies the contents of the file system, creates an iso image and flashes it onto my device.
The problem that I'm having is that I'm getting inconsistent results between installing apt packages using a docker RUN statement vs entering the image and installing apt packages.
i.e.:
# docker build -t jetsontk1:base .
Dockerfile
from jetsontk1:base1
RUN apt update
RUN apt install build-essential cmake
# or
RUN /bin/bash -c 'apt install build-essential cmake -y'
vs:
docker run -it jetsontk1:base1 /bin/bash
# apt update
# apt install build-essential cmake
When I install using the docker script I get the following error:
Processing triggers for man-db (2.6.7.1-1) ...
terminate called after throwing an instance of 'std::length_error'
what(): basic_string::append
qemu: uncaught target signal 6 (Aborted) - core dumped
Aborted (core dumped)
The command '/bin/sh -c /bin/bash -c 'apt install build-essential cmake -y'' returned a non-zero code: 134
I have no issues when manually installing applications from when I'm inside the container, but there's no point in using docker to manage this image building process if I can't do apt installs :/
The project can be found here: https://github.com/dtmoodie/tk1_builder
The current state, with the issue as I presented it, is at commit 8e22c0d5ba58e9fdab38e675eed417d73ae0aad9.

Why doesn't dockerized CentOS recognize pip?

I want to create a container with Python and a few packages on top of centos. I tried running several commands inside a raw centos container, and everything worked fine; I installed everything I wanted. Then I created a Dockerfile with the same commands executed via RUN, and I'm getting /bin/sh: pip: command not found. What could be wrong? I mean, why can everything be executed on the command line but not via RUN? I've tried both variants:
RUN command
RUN command
RUN pip install ...
and
RUN command\
&& command\
&& pip install ...
Commands that I execute:
from centos
run yum install -y centos-release-scl\
&& yum install -y rh-python36\
&& scl enable rh-python36 bash\
&& pip install django
UPD: Using the full path to pip helped. So what's wrong?
You need to install pip first using
yum install python-pip
or if you need python3 (from epel)
yum install python36-pip
When not sure, ask yum:
yum whatprovides /usr/bin/pip
python2-pip-18.1-1.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : #System
Matched from:
Filename : /usr/bin/pip
python2-pip-18.1-1.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : updates
Matched from:
Filename : /usr/bin/pip
python2-pip-18.0-4.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : fedora
Matched from:
Filename : /usr/bin/pip
This output is from Fedora 29, but you should get a similar result in CentOS/RHEL.
UPDATE
From a comment:
But when I execute the same commands from docker run -ti centos, everything is fine. What's the problem?
Maybe your PATH is broken somehow? Can you try the full path to pip?
As has already been mentioned by @rkosegi, it must be a PATH issue. The following seems to work:
FROM centos
ENV PATH /opt/rh/rh-python36/root/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
RUN yum install -y centos-release-scl
RUN yum install -y rh-python36
RUN scl enable rh-python36 bash
RUN pip install django
I "found" the above PATH by starting a centos container and typing the commands one-by-one (since you've mentioned that it is working).
There is a nice explanation of this in BMitch's slides, which can be found here: sudo-bmitch.github.io/presentations/dc2018/faq-stackoverflow.html#24
Q: Why doesn't RUN work?
Why am I getting ./build.sh is not found?
RUN cd /app/src
RUN ./build.sh
The only part saved from a RUN is the filesystem (as a new layer).
Environment variables, launched daemons, and the shell state are all discarded with the temporary container when pid 1 exits.
Solution: merge multiple lines with &&:
RUN cd /app/src && ./build.sh
I know this was asked a while ago, but I just had this issue when building a Docker image, and wasn't able to find a good answer quickly, so I'll leave it here for posterity.
Adding the scl enable command wouldn't work for me in my Dockerfile, so I found that you can enable scl packages without the scl command by running:
source /opt/rh/<package-name>/enable.
If I remember correctly, you won't be able to do:
RUN source /opt/rh/<package-name>/enable
RUN pip install <package>
Because each RUN command creates a different layer, and shell sessions aren't preserved, so I just ran the commands together like this:
RUN source /opt/rh/rh-python36/enable && pip install <package>
I think the scl command has issues running in Dockerfiles because scl enable <package> bash will open a new shell inside your current one, rather than adding the package to the path in your current shell.
Edit:
Found that you can add packages to your current shell by running:
source scl_source enable <package>
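Putting the pieces together, a minimal Dockerfile for the setup in the question might look like this (a sketch using the enable file path from above):
FROM centos
RUN yum install -y centos-release-scl && yum install -y rh-python36
# each RUN starts a fresh shell, so the enable and the pip install must share one RUN
RUN source /opt/rh/rh-python36/enable && pip install django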

The command '/bin/sh -c apt-get install erlang' returned a non-zero code: 1

I am a beginner with Docker and am using Ubuntu 18.04 as the host machine.
While searching for a solution, the only suggestion I found was to increase the VM disk size, on the assumption that the failure is due to low disk space.
But I am not using a VM, and the available disk space is 87+ GB.
Below is my Dockerfile content.
FROM ubuntu
RUN apt-get update
RUN apt-get install erlang
EXPOSE 15672
On triggering to build, I am getting the following error:
Sending build context to Docker daemon 1.694GB
Step 1/4 : FROM ubuntu
---> cd6d8154f1e1
Step 2/4 : RUN apt-get update
---> Using cache
---> 04473efa791a
Step 3/4 : RUN apt-get install erlang
---> Running in bb7a0664bb20
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed:
adwaita-icon-theme at-spi2-core ca-certificates ca-certificates-java dbus
dconf-gsettings-backend dconf-service default-jre-headless emacsen-common
erlang-asn1 erlang-base erlang-common-test erlang-corba erlang-crypto
.
.
.
notification-daemon openjdk-11-jre-headless openssl shared-mime-info
ubuntu-mono ucf x11-common xdg-user-dirs xkb-data
0 upgraded, 202 newly installed, 0 to remove and 1 not upgraded.
Need to get 137 MB of archives.
After this operation, 657 MB of additional disk space will be used.
Do you want to continue? [Y/n] Abort.
The command '/bin/sh -c apt-get install erlang' returned a non-zero code: 1
apt-get asked you for confirmation (Do you want to continue? [Y/n] Abort.), which docker was apparently unwilling to give: docker build runs non-interactively, with no stdin to answer the prompt. So use apt-get -y install erlang instead.
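Applied to the Dockerfile from the question:
FROM ubuntu
RUN apt-get update
# -y answers the confirmation prompt automatically; builds have no interactive stdin
RUN apt-get install -y erlang
EXPOSE 15672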
