Short Version
Debian's httpredir.debian.org mirror service causes my Docker builds to fail very frequently because apt-get can't download a package, can't connect to a server, or hits some similar error. Am I the only one having this problem? Is the problem mine, Debian's, or Docker's? Is there anything I can do about it?
Long Version
I have several Dockerfiles built on debian:jessie, and Debian by default uses the httpredir.debian.org service to find the best mirror when using apt-get, etc. Several months ago, httpredir was giving me continual grief when trying to build images. When run inside a Dockerfile, apt-get using httpredir would almost always mess up on a package or two, and the whole build would fail. The error usually looked like a mirror was outdated or corrupt in some way. I eventually stopped using httpredir in all my Dockerfiles by adding the following lines:
# don't use httpredir.debian.org mirror as it's very unreliable
RUN echo deb http://ftp.us.debian.org/debian jessie main > /etc/apt/sources.list
Today I went back to trying httpredir.debian.org because ftp.us.debian.org is out of date for a package I need, and sure enough it's failing on Docker Hub:
Failed to fetch http://httpredir.debian.org/debian/pool/main/n/node-retry/node-retry_0.6.0-1_all.deb Error reading from server. Remote end closed connection [IP: 128.31.0.66 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
Here's the apt-get command I'm running in this case, though I've encountered it with many others:
RUN apt-get update && apt-get install -y \
build-essential \
chrpath \
libssl-dev \
libxft-dev \
libfreetype6 \
libfreetype6-dev \
libfontconfig1 \
libfontconfig1-dev \
curl \
bzip2 \
nodejs \
npm \
git
Thanks for any help you can provide.
I just had the same problem today, when rebuilding a Dockerfile I had not built in a while.
Adding this line before the apt-get install seems to do the trick:
RUN apt-get clean
Got the idea here:
https://github.com/docker/hub-feedback/issues/556
https://github.com/docker-library/buildpack-deps/issues/40
https://github.com/Silverlink/buildpack-deps/commit/be1f24eb136ba87b09b1dd09cc9a48707484b417
From the discussion on this question, and my experience dealing with this issue repeatedly over a number of months, apt-get clean does not seem to help in and of itself; rather, the fact that you're rebuilding (so httpredir usually picks a different mirror) is what gets it to work. Indeed, manually triggering a rebuild or two has, without exception, eventually resulted in a successful build.
That is obviously not a viable solution, though. So, no, I don't have a solution, but I also don't have enough reputation to mark this as a duplicate.
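One mitigation that has sometimes helped me (a sketch only; the retry count is arbitrary, and this merely papers over flaky mirrors rather than fixing httpredir) is to tell apt to retry transient fetch failures before giving up:

# retry transient fetch failures instead of failing the whole build
RUN echo 'Acquire::Retries "5";' > /etc/apt/apt.conf.d/80-retries

With that in place, a single dropped connection no longer aborts the whole apt-get install layer.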
Related
We use Docker to define the build environment precisely and to help with deterministic builds, but on my machine I get a tiny difference in the build results when using Docker that I don't get when building without Docker.
I did pretty extensive testing and am out of ideas :(
I tested on the following systems:
A: My new PC without Docker
AD1: My new PC with Docker, using our Dockerfile based on ubuntu:18.04 compiled "a year ago"
AD2: My new PC with Docker, using our Dockerfile based on ubuntu:19.10 compiled now
B: My laptop (whose disk I had copied to my new PC) without Docker
BD: My laptop with Docker
CD1: Co-worker's laptop with Docker, using our Dockerfile based on ubuntu:18.04 compiled "a year ago"
CD2: Co-worker's laptop with Docker, using our Dockerfile based on ubuntu:19.10 compiled now
DD: A Digital Ocean VPS with our Dockerfile based on ubuntu:18.04 compiled now
In all scenarios we got one of two build results, which I will call variant X and variant Y.
We got variant X using A, B, CD1, CD2 and DD.
We got variant Y using AD1, AD2 and BD.
The issue has been 100% reproducible across several releases of our Android app. It did not go away when I updated Docker from 19.03.6 to 19.03.8 to match my co-worker's version. We both had Ubuntu 19.10 back then, and I still get the issue on Ubuntu 20.04.
I always freshly cloned our project into a new folder, used disorderfs to eliminate file system sorting issues and mounted the folder into the docker container.
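For reference, the setup looked roughly like this (a sketch; the image name and build command are placeholders for our real ones):

# present the freshly cloned project through disorderfs so directory entries
# are always returned in a deterministic, sorted order
mkdir sorted
disorderfs --sort-dirents=yes --reverse-dirents=no project/ sorted/
# build inside the container with the sorted view mounted in
docker run --rm -v "$PWD/sorted:/workspace" -w /workspace our-android-builder \
    ./gradlew assembleRelease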
I doubt it's relevant but we are using this Dockerfile:
FROM ubuntu:18.04
RUN dpkg --add-architecture i386 && \
apt-get update -y && \
apt-get install -y software-properties-common && \
apt-get update -y && \
apt-get install -y wget \
openjdk-8-jre-headless=8u162-b12-1 \
openjdk-8-jre=8u162-b12-1 \
openjdk-8-jdk-headless=8u162-b12-1 \
openjdk-8-jdk=8u162-b12-1 \
git unzip && \
rm -rf /var/lib/apt/lists/* && \
apt-get autoremove -y && \
apt-get clean
# download and install Android SDK
ARG ANDROID_SDK_VERSION=4333796
ENV ANDROID_HOME /opt/android-sdk
RUN mkdir -p /opt/android-sdk && cd /opt/android-sdk && \
wget -q https://dl.google.com/android/repository/sdk-tools-linux-${ANDROID_SDK_VERSION}.zip && \
unzip *tools*linux*.zip && \
rm *tools*linux*.zip && \
yes | $ANDROID_HOME/tools/bin/sdkmanager --licenses
Also, here are the build instructions I run that produce different results. The diff itself can be found here.
Edit: I also filed it as a bug on the docker repo.
Docker is not fully architecture-independent. Different architectures can produce more or less minute differences. Usually this should not affect anything important, but it may change some optimisation decisions of a compiler and similar things. It is more visible if you compare very different CPUs, like AMD64 vs ARM. For Java it should not matter, but it seems that at least sometimes it does.
Another thing is network and DNS. When you run apt-get, wget and similar commands, code or binaries are downloaded over the network. The result may differ depending on which DNS you use (which may point you to a different server or a different repository URL), and there can be minute differences. In theory there should be no difference, but in practice there sometimes is: a new version is being rolled out and is only visible on some nodes, something went wrong on a mirror, or there is a cache/proxy in between that serves stale content.
The latter can also create differences that appear over time: the app is compiled one month, someone tries to verify it a few weeks or months later, apt-get installs different versions of the libraries, and as a result there are minute differences.
I'm not sure which applies here but I have some ideas:
try making some small changes to the app so that it builds identically on most popular CPUs, do extensive testing, and then list the architectures on which it can be verified
make the verification process a little more complex and non-free, so users have to run a server instance (on AWS, Google, Azure, Rackspace or elsewhere) with a specified architecture and build and verify there; you could try to specify on exactly which types of machines the result will be the same and what the minimal requirements are (as it may or may not run on free-plan instances)
diff the content of the created images (not only the APK but the full system image); maybe something important differs between the Docker images on the machines that produce different results
try to find as small an initial image as possible, and don't let apt-get or other tools automatically install the newest versions of dependencies; instead, specify all dependencies and pin their versions (see the sketch below)
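To illustrate the last point, here is a sketch that reuses the versions already pinned in the Dockerfile above; the remaining packages would still need their versions taken from a known-good build (e.g. via apt-cache policy <pkg>):

FROM ubuntu:18.04
# pin exact versions so a rebuild weeks later cannot silently pull newer packages
RUN apt-get update -y && \
    apt-get install -y --no-install-recommends \
    openjdk-8-jdk-headless=8u162-b12-1 \
    openjdk-8-jdk=8u162-b12-1 \
    git unzip && \
    rm -rf /var/lib/apt/lists/*
# ideally git and unzip would be pinned to exact versions as well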
I was one of those having trouble with the above-mentioned issue where, after a "kubectl delete -f", my container would be stuck on "Terminating".
I could not see anything in the Docker logs to help me narrow it down.
After a Docker restart the pod would be gone and I could continue as usual, but this is not the way to live your life.
I Googled for hours and finally found something in a random post somewhere.
Solution:
When I installed Kubernetes on Ubuntu 16.04, I followed a guide that said to install "docker.io".
The article I found said to remove "docker.io" and instead use a "docker-ce" or "docker-ee" installation.
BOOM, I did it, disabled swap (swapoff) and my troubles are no more.
I hope this helps people that are also stuck with this.
Cheers
As kleuf mentioned in comments, the solution to the stuck docker container in his case was the following:
When i installed Kubernetes on Ubuntu 16.04 i followed a guide that
said to install "docker.io". In this article it said to remove
"docker.io" and rather use a "docker-ce or docker-ee" installation.
sudo apt-get remove docker docker-engine docker-ce docker.io
sudo apt-get remove docker docker-engine docker.io -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce -y
sudo service docker restart
BOOM, i did it, disabled the swappoff function and my troubles are no
more.
I hope this helps people that are also stuck with this.
Consider the following Dockerfile:
FROM alpine:edge
EXPOSE \
# web portal
8080 \
# backdoor
8081
Built like so:
docker build .
We observe the following output:
Sending build context to Docker daemon 17.1TB
Step 1/2 : FROM alpine:edge
---> 7463224280b0
Step 2/2 : EXPOSE 8080 8081
---> Using cache
---> 7953f8df04d9
[WARNING]: Empty continuation line found in:
EXPOSE 8080 8081
[WARNING]: Empty continuation lines will become errors in a future release.
Successfully built 7953f8df04d9
So, given that it'll soon become illegal to put comments in the middle of a multi-line section: what's the new recommended way to comment multi-line commands?
This is particularly important for RUN commands, since we are encouraged to reduce image layers by &&ing commands together.
Not sure exactly when this was introduced, but I'm currently experiencing this in version:
🍔 docker --version
Docker version 17.07.0-ce, build 8784753
I'm using Docker's edge release stream, so maybe this will not yet look familiar if you are using Docker stable.
17.07.0-ce started to warn on empty continuation lines. However, it incorrectly treated comment-only lines as empty. This is fixed in moby#35004 and is included in 17.10.0-ce.
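Until you are on a version with that fix, one workaround (just a sketch) is to keep comments outside the continuation entirely:

# web portal: 8080, backdoor: 8081
EXPOSE 8080 8081

# update the package lists, then install curl and bzip2 in a single layer
RUN apt-get update && \
    apt-get install -y curl bzip2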
On top of what others have said above (the warning might be caused by comments inside continuation blocks and/or Windows CR/LF line endings, in which case use dos2unix), this message can also show up when your last command ends with a backslash \ character. For example, if you have this:
RUN apt-get update \
&& apt-get upgrade \
&& apt-get -y install build-essential curl gnupg libfontconfig ca-certificates bzip2 \
&& curl -sL https://deb.nodesource.com/setup_16.x | bash - \
&& apt-get -y install nodejs \
&& apt-get clean \
&& rm -rf /tmp/* /var/lib/apt/lists/* \
Notice the last \ at the end. This will get you the same error:
docker [WARNING]: Empty continuation line found in:
So, just remove that last \ and you're all set.
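For reference, a trimmed version of the same block, correctly terminated (no backslash on the final line):

RUN apt-get update \
    && apt-get -y install nodejs \
    && apt-get clean \
    && rm -rf /tmp/* /var/lib/apt/lists/*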
You could break the RUN commands out onto separate lines, and then use the experimental (at the time of writing*) --squash flag.
* note that it's been suggested that multi-stage builds might make --squash redundant. That is actively being discussed here, with a proposal open here.
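Usage looks like this (myimage is a placeholder tag; --squash requires the daemon to have experimental features enabled):

# build normally, then squash the newly built layers into a single layer
docker build --squash -t myimage .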
If, like me, you came here with the same error but no comments in your Dockerfile's RUN item, you have either mixed or DOS line endings. Run dos2unix on your Dockerfile and that'll fix it.
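A quick way to check and fix this (dos2unix may need to be installed first):

# CRLF line endings show up as a trailing ^M before the $ end-of-line marker
cat -A Dockerfile | head
# convert the file to Unix (LF) line endings in place
dos2unix Dockerfile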
Edit: Solved - typo
I have a Dockerfile that successfully creates a virtualenv using virtualenvwrapper (along with setting up a heap of "standard" settings/packages in our normal environment). I am using the resulting image as a "base image" for further use. All good so far. However, the following Dockerfile (based on the first image, "base_image_14.04") falls down at the last line:
FROM base_image_14.04
USER root
RUN DEBIAN_FRONTEND=noninteractive \
apt-get update && apt-get install -y \
libproj0 libproj-dev \
libgeos-c1v5 libgeos-dev \
libjpeg62 libjpeg-dev \
zlib1g zlib1g-dev \
libfreetype6 libfreetype6-dev \
libgdal20 libgdal-dev \
&& rm -rf /var/lib/apt/lists
USER webdev
RUN ["/bin/bash", "-ic", "mkproject maproxy"]
EXPOSE 80
WORKDIR $PROJECT_HOME/mapproxy
ADD ./requirements.txt .
RUN ["/bin/bash", "-ic", "workon mapproxy && pip install -r requirements.txt"]
The "mkproject mapproxy" works fine. If I comment out the last line it builds successfully and I can spin up the container and run "workon mapproxy" manually, not a problem. But when I try and build with the last line, it gives a workon error:
ERROR: Environment 'mapproxy' does not exist. Create it with 'mkvirtualenv mapproxy'.
workon is being called, but for some reason it can't find the mapproxy virtualenv.
WORKON_HOME & PROJECT_HOME both exist (defined in the parent image) and point to the correct locations (and are used successfully by "mkproject mapproxy").
So why is workon returning an error when the mapproxy virtualenv exists? The same error happens when I isolate that last line into a third Dockerfile building on the second.
Solved: It was a simple typo. mkproject maproxy instead of mapproxy. :sigh:
I am trying to build a docker image and am running into similar problems.
The first question was: why use a virtualenv in Docker at all? The main reason, in a nutshell, is to minimize the effort of migrating an existing, working approach into a Docker container. I will eventually use docker-compose, but I wanted to start by getting my feet wet with it all in a single Docker container.
In my first attempt I installed almost everything with apt-get, including uwsgi. I installed my app "globally" with pip3. The app has command line functionality and a separate Flask web app, hence the need for uwsgi. The command line functionality works, but when I make a request of the Flask app, uwsgi/Python has a problem with the locale: Fatal Python error: Py_Initialize: Unable to get the locale encoding and ImportError: No module named 'encodings'.
I have stripped away all my app specific additions to narrow down the problem. This is the Dockerfile I'm using:
# Docker image definition for testing
FROM ubuntu:xenial
# Create a user
RUN useradd -G sudo -ms /bin/bash tester
RUN echo 'tester:password' | chpasswd
WORKDIR /home/tester
# Skipping apt-get update to save some build time. Some are kept
# to insure they are the same as on host setup.
RUN apt-get install -y python3 python3-dev python3-pip \
virtualenv virtualenvwrapper sudo nano && \
apt-get clean -qy
# After above, can we use those installed in rest of Dockerfile?
# Yes, but not always, such as with virtualenvwrapper. What about
# virtualenv? How do you "source" the script? Doesn't appear to be
# installed, as bash complains "source needs a single parameter"
ENV VIRTUALENVWRAPPER_PYTHON /usr/bin/python3
ENV VIRTUALENVWRAPPER_VIRTUALENV /usr/bin/virtualenv
RUN ["/bin/bash", "-c", "source", "/usr/share/virtualenvwrapper/virtualenvwrapper.sh"]
# Create a virtualenv so uwsgi can find locale
# RUN mkdir /home/tester/.virtualenv && virtualenv -p`which python3` /home/bts_tools/.virtualenv/bts_tools
RUN mkvirtualenv -p`which python3` bts_tools && \
workon bts_tools && \
pip3 --disable-pip-version-check install --upgrade bts_tools
USER tester
ENTRYPOINT ["/bin/bash"]
CMD ["--login"]
The build fails on the line where I try to source the virtualenvwrapper script. Bash complains that source needs an argument - the file to be sourced. If I comment out those RUN lines, the image builds without error. When I run the resulting container, I see all the additions to the environment that virtualenvwrapper makes (you can see all of them by executing the "set" command without any arguments), and the script to be sourced is there too.
So my question is: why doesn't Docker find them? How does the Docker build process work if the results of any previous RUNs or ENVs aren't applied for subsequent use in the Dockerfile? I know some things are applied and work: for example, if you apt-get install nginx you can refer to /etc/nginx or alter things under that folder; you can create a user and set its password or cd into its home folder; and if I move the WORKDIR before the RUN useradd -G, I see a warning from useradd that the home folder already exists. Yet when I tried to use the "time" program to time how long various things in the Dockerfile take, docker complained it couldn't find 'time'.
So what exactly is going on? I have spent the last 3 days trying to figure this out. It just shouldn't be this difficult. What am I missing?
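For what it's worth, my current (possibly wrong) understanding is that each RUN starts a fresh shell, so anything sourced in one RUN is gone by the next. Here is a sketch of what I think the step would have to look like, with the sourcing and the virtualenv work combined into a single shell invocation (assuming the script really is at /usr/share/virtualenvwrapper/virtualenvwrapper.sh):

# source virtualenvwrapper and create/use the virtualenv in the same shell
RUN ["/bin/bash", "-c", "source /usr/share/virtualenvwrapper/virtualenvwrapper.sh && mkvirtualenv -p`which python3` bts_tools && workon bts_tools && pip3 --disable-pip-version-check install --upgrade bts_tools"]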
Parts of the bts_tools Flask app worked when I wasn't using virtualenvs. Most of the app didn't work, though, and the issue was this locale problem. Since everything works on the host outside of Docker, and after trying to alter PATH, PYTHONHOME and PYTHONPATH in my uwsgi start script to overcome the dreaded "locale encoding" fatal error, I decided to replicate the host setup as closely as possible, since that setup doesn't have the locale issue. When I've had this problem before, I could run dpkg-reconfigure python3 or fix it with changes to PATH or other environment settings. If you google the problem you'll see many people have difficulties with Python and locales. It's almost enough reason to avoid using Python!
I posted about the locale issue elsewhere, if it helps.
I'm trying to write a Dockerfile to build Kaldi (an open source speech recognition system) based on the "buildpack-deps:jessie-scm" image. This is my Dockerfile:
FROM buildpack-deps:jessie-scm
RUN apt-get update
RUN apt-get install -y python2.7 libtool python libtool-bin make
RUN mkdir /opt/kaldi
RUN git clone https://github.com/kaldi-asr/kaldi.git /opt/kaldi --depth=1
RUN ln -s -f bash /bin/sh
WORKDIR /opt/kaldi
RUN cd tools/extras && ./check_dependencies.sh
RUN cd tools && ./install_portaudio.sh
RUN cd tools && make -j 4 && make clean
RUN cd src && ./configure --shared --use-cuda=no && make depend && make -j 4 && make -j 4 online onlinebin online2 && make clean
This fails at the "check_dependencies.sh" script, which is complaining that various base dependencies aren't installed (g++, zlib, automake, autoconf, patch, bzip2) ... but the description of the image that I'm basing this on (https://github.com/docker-library/buildpack-deps/blob/587934fb063d770d0611e94b57c9dd7a38edf928/jessie/Dockerfile) suggests that all of these dependencies should be available in the base image. Why is my build failing here?
I should note that I've attempted these build steps on a bare Debian Jessie system with the required dependencies installed and they were successful there, so I don't think it's a problem with the build scripts provided with Kaldi, but definitely a Docker-related issue.
It looks like I had misunderstood the different tags for the buildpack-deps image. The *-scm tags don't add source control tools on top of the bundled build tools and libraries; they only add the source control tools, and the build tools are layered on top of those in the full buildpack-deps image. So I should just be using buildpack-deps:jessie, not buildpack-deps:jessie-scm (the latter is basically a bare Debian system with git etc. installed and nothing else).
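So the fix is a one-line change at the top of the Dockerfile:

# the full image: source control tools plus the build tools and dev libraries
FROM buildpack-deps:jessie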