It would seem that apt-get is having issues connecting to the repository servers. I suspect it is a compatibility issue, as mentioned here; however, the proposed solution of apt-get clean does not work for me. I am also surprised, if this is the case, that more people are not having this issue.
MWE
Dockerfile
FROM debian:jessie
RUN apt-get clean && apt-get update && apt-get install -y --no-install-recommends \
git
$ docker build .
Sending build context to Docker daemon 2.048 kB
Step 0 : FROM debian:jessie
---> 4a5e6db8c069
Step 1 : RUN apt-get clean && apt-get update && apt-get install -y --no-install-recommends git
---> Running in 43b93e93feab
Get:1 http://security.debian.org jessie/updates InRelease [63.1 kB]
... some omitted ...
Get:6 http://httpredir.debian.org jessie-updates/main amd64 Packages [3614 B]
Fetched 9552 kB in 7s (1346 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
The following extra packages will be installed:
... some omitted ...
0 upgraded, 26 newly installed, 0 to remove and 0 not upgraded.
Need to get 13.2 MB of archives.
After this operation, 64.0 MB of additional disk space will be used.
Get:1 http://security.debian.org/ jessie/updates/main libgnutls-deb0-28 amd64 3.3.8-6+deb8u2 [694 kB]
... some omitted ...
Get:5 http://httpredir.debian.org/debian/ jessie/main libnettle4 amd64 2.7.1-5 [176 kB]
Err http://httpredir.debian.org/debian/ jessie/main libffi6 amd64 3.1-2+b2
Error reading from server. Remote end closed connection [IP: 176.9.184.93 80]
... some omitted ...
Get:25 http://httpredir.debian.org/debian/ jessie/main git amd64 1:2.1.4-2.1 [3624 kB]
Fetched 13.2 MB in 10s (1307 kB/s)
E: Failed to fetch http://httpredir.debian.org/debian/pool/main/libf/libffi/libffi6_3.1-2+b2_amd64.deb Error reading from server. Remote end closed connection [IP: 176.9.184.93 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
The command '/bin/sh -c apt-get clean && apt-get update && apt-get install -y --no-install-recommends git' returned a non-zero code: 100
Please note that I also posted here with a different issue. I believe it to be unrelated, but it may actually be related.
For anyone else hitting this, here is my attempt to "fix" the issue by swapping out httpredir for a single working mirror each time the Dockerfile is built:
FROM debian:je...
# Insert this line before "RUN apt-get update" to dynamically
# replace httpredir.debian.org with a single working domain
# in attempt to "prevent" the "Error reading from server" error.
RUN sed -i "s/httpredir.debian.org/`curl -s -D - http://httpredir.debian.org/demo/debian/ | awk '/^Link:/ { print $2 }' | sed -e 's#<http://\(.*\)/debian/>;#\1#g'`/" /etc/apt/sources.list
# Continue with your apt-get update...
RUN apt-get update...
What this command does is:
Curl http://httpredir.debian.org/demo/debian/ from the build machine to get the headers of the Debian demo page (-s is silent, suppressing progress output; -D - dumps the headers to stdout).
From those headers, find the Link header fragment. This contains the best route as recommended by httpredir.
The last sed -e ... extracts the domain name from the link found in step 2.
Finally, the domain found in step 3 is fed into the outer sed command, which replaces the domain httpredir.debian.org in /etc/apt/sources.list.
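To illustrate, here is roughly what the pipeline produces when run by hand on the build machine (the mirror shown is just an example, and the exact Link header format is my assumption based on what the awk and sed patterns expect):
$ curl -s -D - http://httpredir.debian.org/demo/debian/ | awk '/^Link:/ { print $2 }'
<http://mirrors.tuna.tsinghua.edu.cn/debian/>;
$ echo '<http://mirrors.tuna.tsinghua.edu.cn/debian/>;' | sed -e 's#<http://\(.*\)/debian/>;#\1#g'
mirrors.tuna.tsinghua.edu.cn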
This is not a fix, but rather a simple hack to (greatly) reduce the chances of a failed build. And pardon me if it looks weird, as it is my first attempt at sed and piping.
Edit
On a side note, if the domain it picks is simply too slow or is not responding as it should, you may want to do it manually:
Visit http://httpredir.debian.org/demo.html, and you should see a link there like http://......./debian/. For example, at the time of writing, I saw http://mirrors.tuna.tsinghua.edu.cn/debian/
Then, instead of the long RUN sed -i ... command, use this:
RUN sed -i "s/httpredir.debian.org/mirrors.tuna.tsinghua.edu.cn/" /etc/apt/sources.list
I added apt-get clean to my Dockerfile before the apt-get update line, and it seems to have done the trick.
I guess I have no way of knowing whether it was the extra command or just luck that fixed my build, but I took the advice from https://github.com/CGAL/cgal-testsuite-dockerfiles/issues/19
The httpredir.debian.org mirror is "magic" in that it will load-balance and geo-IP you to transparently increase performance and availability. I would therefore immediately suspect it of causing your problem, or at least make it the first thing to rule out.
I would check if you could:
Still reproduce the problem; httpredir.debian.org will throw out "bad" mirrors from its internal lists, so your issue may have been temporary.
Reproduce the problem with a different, non-httpredir.debian.org mirror. Try something like ftp.de.debian.org. If it then works with this mirror, do please contact the httpredir.debian.org maintainer and report the issue to them. They are quite responsive and open to bug reports.
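For example, to pin the build to a fixed mirror while testing (the mirror choice here is just an example):
RUN sed -i 's/httpredir.debian.org/ftp.de.debian.org/' /etc/apt/sources.list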
For those visiting with similar issues, using the --no-cache flag with docker build may help. Similar issues (though not this exact one) can occur if the apt-get update layer is old and is not being re-run due to caching.
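For example (the image tag is a placeholder):
docker build --no-cache -t myimage .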
Not enough reputation to comment on previous answers, so I will (confusingly) add a new answer:
I don't think hardcoding a single mirror is really a viable solution since, as seen here for example, there is a reason Debian implemented the whole httpredir thing: mirrors go down or fall out of date.
I've dealt with this issue many times, and the logs always indicate that Docker is actually running the apt-get command, which means --no-cache is unlikely to be what fixes it. It's just that if you rebuild, httpredir is likely to pick a different mirror, even if you don't change anything in your Dockerfile, and the build will work.
I was able to solve this problem by adding -o 'Acquire::Retries=3' to the apt-get install command.
RUN apt-get update && apt-get install -o 'Acquire::Retries=3' -y git
It seems that apt then automatically retries fetching the package, possibly from another mirror.
apt-get acquire documentation
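A related pattern (my own suggestion, not from the documentation linked above) is to set the option persistently in an apt configuration file, so every subsequent apt-get invocation retries; the filename 80-retries is arbitrary:
RUN echo 'Acquire::Retries "3";' > /etc/apt/apt.conf.d/80-retries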
EDIT: After some investigation I found that my problem was related with the proxy server I was using. I'll leave the answer here anyway in case it helps anyone.
Related
When I run sudo apt-get update:
First error: https://pkg.jenkins.io/debian-stable binary/ Packages
server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
Second error: W: The repository 'http://pkg.jenkins.io/debian-stable binary/ Release' does not have a Release file.
N: Data from such a repository can't be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: Failed to fetch https://pkg.jenkins.io/debian-stable/binary/Packages server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
E: Some index files failed to download. They have been ignored, or old ones used instead.
Try adding a '[trusted=yes]' tag in your /etc/apt/sources.list (or a sources.list.d/ file) at the line related to http://pkg.jenkins.io/debian-stable...
At the end, in your sources.list file, you should end up with something like the following:
deb [trusted=yes] http://pkg.jenkins.io/.....
EDIT: this shell command should do the trick:
sed -i 's/^deb /deb [trusted=yes] /' /etc/apt/sources.list
then reload your apt cache with:
apt update
Hope it helps.
Follow Jenkins Debian Packages for all the commands to install Jenkins on Ubuntu; a sketch of the full sequence is below.
Install Java
Update your local package index
sudo apt-get install jenkins
I have installed it on Ubuntu 22.04.1 LTS and created a video tutorial for it.
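For reference, the sequence from the Jenkins Debian Packages page looks roughly like this at the time of writing (the key URL and the Java package names are taken from the Jenkins docs and Ubuntu repos and may change over time):
# Install Java (Jenkins needs a JRE)
sudo apt-get update
sudo apt-get install -y fontconfig openjdk-17-jre
# Add the Jenkins repository key and source entry
sudo wget -O /usr/share/keyrings/jenkins-keyring.asc https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
# Update your local package index, then install Jenkins
sudo apt-get update
sudo apt-get install -y jenkins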
Docker layers are additive, meaning that purging packages in a subsequent layer will not remove them from the previous one, and thus from the image.
In my understanding, what happens is that an additional masking layer is created, in which those packages are not shown anymore.
Indeed, if I build the MWE below and then run apt list --installed | grep libpython3.9-minimal after the purging, the package cannot be found.
However, I still don't understand entirely what happens under the hood.
Are the packages effectively still there, but masked?
If one of these packages causes vulnerability issues, is purging=masking a solution, or will we still have issues while being unaware of them (because the package seems to be removed and so does not show in an image scan, but is still there)?
FROM openjdk:11
# Remove packages
RUN apt-get purge -y libpython3.9-minimal
RUN apt-get autoremove -y
ENTRYPOINT ["/bin/bash"]
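One way to observe this from the outside (the purge-test tag is just a placeholder): build the image and inspect the layers; the purge and autoremove steps add new layers on top instead of shrinking the image, so the files are still present in the lower layer.
docker build -t purge-test .
docker history purge-test
For this reason, the usual advice is to install and purge a package within the same RUN instruction, so its files never get committed into a layer at all.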
Hi, I have a Docker image, and since yesterday I have noticed that I cannot build it: it fails with an error saying that some repository URL is invalid. When I checked, I saw that it is true: the Debian repository structure at https://packages.microsoft.com/debian/10/ was changed on 16/6/2021. Does anyone know the solution?
FROM python:3.7-buster
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get update && ACCEPT_EULA=Y apt-get install -y \
msodbcsql17 \
unixodbc-dev \
openssh-server \
nginx-full \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
(error screenshot omitted)
It is a general issue. The issue is partially solved, but the download speed is very slow. Here is a detailed report from the BleepingComputer blog post: Microsoft Linux repos suffer day-long outage, still recovering
Although Microsoft's initial ETA to resolve the issue was "two hours or so," the problem spanned well over 14 hours, with users continuing to experience degraded performance.
Microsoft engineer Rahul Bhandari stepped in on the same GitHub thread to confirm:
"Our infra team is working on this. There is an issue with some of the mirrors on packages.microsoft.com so as per them, the current ETA to resolve this issue is in next two hours or so," said Bhandari.
Bhandari later confirmed that some storage issues were the root cause of these problems.
Microsoft's principal engineering manager, Ravindra Bhartiya said:
"We had an incident with packages.microsoft.com that resulted in packages being unavailable."
"Our engineering team has mitigated the issue and our internal data shows improvement in the availability"
"If you still have problems, please provide us more information (output of "apt-get update|install") and we can investigate it further," said Bhartiya.
But even today, at the time of writing, users are complaining about slow download speeds when retrieving packages from Microsoft's repo.
I'm currently working on a Dockerfile. I am trying to build a CentOS 7.6 base image, and I get a failure when I try to use any yum packages. I'm not sure what the cause of this is.
I've already attempted to run as the root user to see if that makes a difference, but it doesn't help the situation. I've also done a docker pull centos to receive the latest version of CentOS.
I simplified the code and still get the same error.
FROM centos
ARG MONGO-RAILS-VERSION="0.0"
RUN yum install vim
# curl -L get.rvm.io | bash -s stable \
# rvm install 2.3.1 \
# rvm use 2.3.1 --default \
# gem install rails
I get an error that looks something like this
One of the configured repositories failed (Unknown),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
3. Run the command with the repository temporarily disabled
yum --disablerepo=<repoid> ...
4. Disable the repository permanently, so yum won't use it by default. Yum
will then just ignore the repository until you permanently enable it
again or use --enablerepo for temporary usage:
yum-config-manager --disable <repoid>
or
subscription-manager repos --disable=<repoid>
5. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
Cannot find a valid baseurl for repo: base/7/x86_64
Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=container error was
14: curl#7 - "Failed to connect to 2001:1b48:203::4:10: Network is unreachable"
The command '/bin/sh -c yum install vim' returned a non-zero code: 1
You may want to have a look at Set build-time variables (--build-arg):
$ docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 --build-arg FTP_PROXY=http://40.50.60.5:4567 .
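Note that HTTP_PROXY, HTTPS_PROXY, FTP_PROXY, and NO_PROXY (and their lowercase forms) are among Docker's predefined build args, so no ARG declaration is needed in the Dockerfile. A sketch with placeholder proxy addresses:
docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 --build-arg HTTPS_PROXY=http://10.20.30.2:1234 --build-arg NO_PROXY=localhost,127.0.0.1 .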
I've created a Dockerfile for an application I'm building that has a lot of large apt-get package dependencies. It looks something like this:
FROM ubuntu:15.10
RUN apt-get update && apt-get install -y \
lots-of-big-packages
RUN install_my_code.sh
As I develop my application, I keep coming up with unanticipated package dependencies. However, since all the packages are in one Dockerfile instruction, even adding one more breaks the cache and requires the whole lot to be downloaded and installed, which takes forever. I'm wondering if there's a better way to structure my Dockerfile?
One thought would be to put a separate RUN apt-get update && apt-get install -y command for each package, but running apt-get update lots of times probably eats up any savings.
The simplest solution would be to just add a second RUN apt-get update && apt-get install -y right after the first as a catch-all for all of the unanticipated packages, but that divides the packages in an unintuitive way (i.e., "when I realized I needed it"). I suppose I could combine them when dependencies are more stable, but I find I'm always overly optimistic about when that is.
Anyway, if anyone has a better way to structure it I'd love to hear it. (all of my other ideas run against the Docker principles of reproducibility)
I think you need to run apt-get update only once within the Dockerfile, typically before any other apt-get commands.
You could first have the large list of known packages to install, and if you come up with a new one, just add a new RUN apt-get install -y abc to your Dockerfile and let Docker continue from the previously cached command. Periodically (once a week, once a month?) you could reorganize them as you see fit, or just run everything in a single command.
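For instance, a sketch of that layout (abc and def stand in for whatever packages you discover later):
FROM ubuntu:15.10
RUN apt-get update && apt-get install -y \
    lots-of-big-packages
# Unanticipated dependencies get their own layers, so the big layer
# above stays cached; no second apt-get update is needed as long as
# the cached package lists are still valid.
RUN apt-get install -y abc
RUN apt-get install -y def
RUN install_my_code.sh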
I suppose I could combine them when dependencies are more stable, but
I find I'm always overly optimistic about when that is.
Oh, you actually mentioned this solution already. Anyway, there is no harm in doing these "tweaks" every now and then. Just run apt-get update only once.