I am trying to install a package via pip. However, every use of pip that needs an Internet connection (even the upgrade below) leads to a ReadTimeoutError. My basic Dockerfile, which works on another system, is as follows:
FROM python:3-alpine
RUN wget google.com
RUN pip3 -V
RUN pip3 install --upgrade pip
Line two shows that I have an Internet connection. Output:
Connecting to google.com (216.58.206.14:80)
Connecting to www.google.com (108.177.126.103:80)
index.html 100% |*******************************| 10582 0:00:00 ETA
Line three shows that pip is installed. Output:
pip 10.0.1 from /usr/local/lib/python3.6/site-packages/pip (python 3.6).
However, line four leads to:
Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=100.0)",)': /simple/pip/
I already tried to:
reinstall Docker
increase the default timeout with "--default-timeout=100" (which is why the read timeout is 100 in the error message above); the modified line is shown after this list
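For reference, this is roughly how the timeout flag was applied to the upgrade line (a sketch of the modified Dockerfile line, matching the timeout quoted in the error message):
RUN pip3 install --default-timeout=100 --upgrade pip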
I read that there are problems with pip when you are behind a proxy, which is not the case here. Do you have any other ideas about what is wrong here?
Thanks in advance!
There are two possible solutions:
This may not be a network problem; I just found a solution elsewhere:
Add the following two lines after the FROM line:
ENV http_proxy http://proxy-chain.xxx.com:911/
ENV https_proxy http://proxy-chain.xxx.com:912/
Or switch to another mirror:
Add the following commands before pip install:
RUN mkdir -p ~/.pip && \
    printf '[global]\ntrusted-host = pypi.douban.com\nindex-url = http://pypi.douban.com/simple\n' > ~/.pip/pip.conf
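The same mirror can also be passed per command instead of via pip.conf (a hedged alternative using pip's own flags; &lt;package&gt; is a placeholder):
RUN pip3 install -i http://pypi.douban.com/simple --trusted-host pypi.douban.com <package>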
If it is a network issue (for example, it simply takes too long to reach the package index):
Try adding the following options to pip install:
--default-timeout=1000 --no-cache-dir
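Put together, the install line would look something like this (requirements.txt is a placeholder for whatever you are installing):
RUN pip3 install --default-timeout=1000 --no-cache-dir -r requirements.txt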
This may also happen when your host's network interface uses a smaller-than-default MTU (the default is usually 1500) and the Docker container does not know about this, so it uses a larger MTU.
We faced this on a GitLab runner that was connected to a VLAN (MTU of 1400). Connections to some hosts worked just fine, but some gave persistent issues (among them PyPI, resulting in the error mentioned here).
The solution was to change the MTU for docker and all problems went away.
/etc/docker/daemon.json
{
"mtu": 1400
}
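To check which MTU your host interface actually uses before setting this, and to apply the change afterwards (a sketch; eth0 is a placeholder for your interface name):
ip link show eth0               # look for the "mtu" value in the output
sudo systemctl restart docker   # daemon.json changes require a daemon restart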
Related
I am trying to build a custom map tile server by following this tutorial on switch2osm.
Instead of using Ubuntu as described in the tutorial, I am using Docker for everything (PostGIS, Apache, etc.).
I am trying to build an image where Apache and renderd are configured (I followed the instructions found here).
Here is my Dockerfile:
FROM httpd:2.4
RUN apt-get update && \
apt-get install -y libapache2-mod-tile renderd
RUN a2enmod tile
RUN a2enconf renderd
CMD ["renderd", "-f", "&&", "httpd-foreground"]
I keep having this error after building and creating the container :
renderd[1]: Initialising unix server socket on /run/renderd/renderd.sock
socket bind failed for: /run/renderd/renderd.sock
I know it's a user rights issue, but I don't see how to fix it.
Can anyone help me solve this issue?
I saw the same problem. I've partially resolved it by changing the owner of /run/renderd via sudo chown -R osm:osm /run/renderd
Then restarting the renderd process.
I've further tried (and failed) to make this permanent by modifying the file:
/etc/systemd/system/multi-user.target.wants/renderd.service
and specifying the user there as well:
[Service]
ExecStart=/usr/bin/renderd -f
User=osm
I do believe the above 'fix' has worked in the past, but doesn't seem to work now on Ubuntu 22.04
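If you build the image yourself, the same ownership fix can be baked into the Dockerfile (a sketch, assuming renderd runs as the osm user as in the tutorial):
RUN mkdir -p /run/renderd && chown -R osm:osm /run/renderd
Keep in mind that /run is typically a tmpfs on a normal host, so outside of Docker the directory has to be recreated at every boot (e.g. via a tmpfiles.d entry) rather than fixed once.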
I am using a customized version of Ubuntu 18.04, and I have a Docker container where I tried to install a .deb package for the use of a FLIR camera. To do so, I downloaded the file spinnaker-2.5.0.80-Ubuntu18.04-arm64-pkg.tar.gz from this website, as suggested for Ubuntu 18.04.
I followed those instructions to install everything, which basically means the following commands:
apt-get install libusb-1.0-0
tar xvfz spinnaker-2.5.0.80-Ubuntu18.04-arm64-pkg.tar.gz
cd spinnaker-2.5.0.80-arm64
./install_spinnaker_arm.sh
During this process the first errors arose, which I could fix by installing iputils-ping and lsb-release inside the Docker container:
apt install iputils-ping
apt install -y lsb-release
However, afterwards another error arose:
/var/lib/dpkg/tmp.ci/preinst: 28 /var/lib/dpkg/tmp.ci/preinst: errmsg: not found
dpkg: error processing archive libspinnaker_2.5.0.80_arm64.deb (--install):
new libspinnaker package pre-installation script subprocess returned error exit status 127
ping: zone2.flir.net: No address associated with hostname
Errors were encountered while processing:
libspinnaker_2.5.0.80_arm64.deb
I thought it was a network issue inside the container, but I do have an Internet connection, which I checked with:
ping www.google.com
Does anybody have a suggestion as to why I am not able to install the Spinnaker SDK inside my Docker container, or an explanation of what "no address associated with hostname" means? I am thankful for every hint in any direction. Could it be an issue because I moved my Docker data folder to an external SD card?
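A quick way to check whether it is name resolution for that particular host (rather than connectivity in general) that fails inside the container, sketched with standard tools available in Ubuntu images:
getent hosts zone2.flir.net || echo "DNS lookup failed"
cat /etc/resolv.conf   # shows which nameservers the container is using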
I configured Deis Workflow in an AWS EKS cluster. After that, I created Deis apps and deployed them to the Deis local repository with:
git push test test:master
When deploying, the Dockerfile is executed. Here is my Dockerfile:
FROM mhart/alpine-node:12
#FROM ubuntu:18.04
ARG SOURCE_VERSION=na
ENV SOURCE_VERSION=$SOURCE_VERSION
RUN apk add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/v3.9 --update bash && rm -rf /var/cache/apk/*
#apt-get update &&\
#apt-get install -y make gcc wget
WORKDIR /app
ADD . .
RUN npm install
EXPOSE 3200
CMD ["node", "app.js"]
This results in an error like:
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.9/main: temporary error (try again later)
WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.9/main: No such file or directory
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.9/community: temporary error (try again later)
WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.9/community: No such file or directory
ERROR: unable to select packages:
bash (no such package):
required by: world[bash]
The command '/bin/sh -c apk add --update bash && rm -rf /var/cache/apk/*' returned a non-zero code: 1
remote: 2021-11-15 13:30:22.569253 I | Error running git receive hook [Build pod exited with code 1, stopping build]
To ssh://deis-builder.app-test.paceup.io:2222/pu-api-gateway.git
! [remote rejected] test -> master (pre-receive hook declined)
error: failed to push some refs to 'ssh://git@deis-builder.app-test.paceup.io:2222/pu-api-gateway.git'
I am totally new to Docker, Deis, and EKS. If anyone can help, I would be grateful.
Finally found the answer: we had configured the node group with Amazon Linux, which didn't support this deployment. We changed the node group to EKS-optimized Ubuntu, deployed the app using Docker, and it is working fine.
Edit:
This is working in some of the Linux versions. In my case it's working on EKS version 1.9 but not working in EKS version 2.0 and above.
This error may also be due to a DNS issue: pass the --dns flag (when running the container) or configure the Docker daemon's DNS so that Google's DNS, 8.8.8.8, is used. Or edit resolv.conf and add nameserver 8.8.8.8 in the container.
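Both variants, sketched (the "dns" key in daemon.json and the --dns flag of docker run are documented Docker options; restart the daemon after editing daemon.json):
/etc/docker/daemon.json (this also affects builds):
{
  "dns": ["8.8.8.8"]
}
Per container at run time:
docker run --dns 8.8.8.8 <image>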
I hope this helps.
I had this problem when my machine had many symptoms of a network configuration problem:
A Dockerfile that had to download zip files from the net could no longer do so and threw the warning in question, which stopped the build. I could download the zip files by entering the URLs in the browser instead, so it was a problem with the container. I checked the same Dockerfile on another, healthy machine and the build ran through.
I had lost the connection to the internal DNS server: I could no longer ping another machine by its name but had to use its internal IP, although the ping had worked the day before.
I could see GCP project items only in Firefox incognito mode.
The answer so far is: change the machine and test whether it fails only on your machine. If that is true, the workaround is already done. As the next step, try to fix any other network problems; it is likely that this will also get rid of the warning.
UPDATE: The problem was a running container that gave my machine its own network. When I ran docker-compose down, the network worked again. When I removed the network from the docker-compose file, the download from inside the container worked again and the warning in question was gone.
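To spot a compose-created network that interferes like this, the standard Docker commands are enough (a diagnostic sketch; &lt;network&gt; is a placeholder):
docker network ls                  # lists all networks, including those created by docker-compose
docker network inspect <network>   # check the subnet for overlaps with your LAN or VPN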
Trying to run ngrok, I get the following warning:
WARN[04-19|17:54:51] failed to get home directory, using $HOME instead err="user: Current not implemented on linux/amd64" $HOME=/root
It occurs whether I try to start a tunnel or merely run ngrok help.
If I do try to start a tunnel (e.g.: ngrok http -host-header=rewrite bilingueanglais.local:80), I get an empty screen, instead of the usual tunnel information.
It used to work fine; I'm not sure what changed. If I remember right, I got the exact same error in the past, but things went back to normal on their own, so I assumed then that the service was down.
However, this time ngrok is clearly up, but the error remains.
Environment:
Running ngrok on ubuntu:16.04 inside of Docker.
ngrok is version 2.2.8 (the latest available version at the time of posting.)
$HOME is /root
I installed ngrok this way inside of my Dockerfile:
RUN apt-get install -y unzip
ADD https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip /ngrok.zip
RUN set -x \
&& unzip -o /ngrok.zip -d /bin \
&& rm -f /ngrok.zip
I'm able to run ngrok on the same computer on OS X instead of Docker, but would like to get things working again for Docker.
I'm confused by the error message and also, to some extent, by the docs where it mentions $HOME. Is the issue with my path? What does ngrok expect?
Any help welcome.
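One thing worth trying when the screen stays empty inside Docker (a hedged suggestion: ngrok 2.x can log to stdout instead of drawing its terminal UI, which tends to misbehave without a proper TTY; check that the flag is available in your build):
ngrok http -log=stdout -host-header=rewrite bilingueanglais.local:80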
I have frequently built Docker containers using CentOS 7 as the base image, but now I am getting an error when I run:
RUN yum update add \
bash \
&& rm -rfv /var/cache/apk/*
ERROR:
Loaded plugins: fastestmirror, ovl
One of the configured repositories failed (Unknown),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
Contact the upstream for the repository and get them to fix the problem.
Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
Run the command with the repository temporarily disabled
yum --disablerepo=<repoid> ...
Disable the repository permanently, so yum won't use it by default. Yum
will then just ignore the repository until you permanently enable it
again or use --enablerepo for temporary usage:
yum-config-manager --disable <repoid>
or
`subscription-manager repos --disable=<repoid>`
Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
Cannot find a valid baseurl for repo: base/7/x86_64 Could not retrieve
mirrorlist
http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=container
error was 14: curl#6 - "Could not resolve host: mirrorlist.centos.org;
Name or service not known" The command '/bin/sh -c yum update add
bash && rm -rfv /var/cache/apk/*' returned a non-zero code: 1
I also saw a few resolutions that suggest using "dhclient", but this error happens when I do docker-compose build.
I ran into this problem attempting to run the same Dockerfile, which fetched several software packages using yum, on two different platforms: one macOS, the other an Ubuntu 16.04-based Linux OS (elementaryOS Loki), both using the official packages from docker.com.
My theory is that the Linux package is just more restrictive out of the box, security-wise, than the macOS one. Maybe this is configurable with some kind of /etc/something config file, but I don't have the expertise with Docker to say for sure. EDIT: See my comment below.
What I can say is there was no additional configuration required for me on macOS (10.11 El Capitan); just docker build . worked fine, and yum processes from the Dockerfile were able to reach all the remote repositories.
In the Ubuntu-derived Linux distro, however, it was necessary to use
docker build --network host .
followed by
docker run -it --network host <image> <command>
when I wanted to run a process inside that image which required internet access.
This may be the case for other Debian-derived systems as well.
There are, of course, security considerations which need to be taken into account when allowing a long-running Docker container to communicate through the host network adapter, unrestricted, and one would do well to review the appropriate documentation in that regard.
My assumption is that for some reason network behavior in docker varies based on distribution.
Try to use:
docker run -d --net mybridge centos
or, if the bridge network does not exist yet, create it first:
docker network create -d bridge mybridge
docker run -d --net mybridge centos
It should start working. Or just edit /etc/hosts and add the mirror address:
Name: mirrorlist.centos.org
Address: 67.219.148.138
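That /etc/hosts edit as a one-liner (the IP is the one quoted above and may change over time, so prefer fixing DNS where possible):
echo "67.219.148.138 mirrorlist.centos.org" >> /etc/hosts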
The root cause of the issue was that the container's proxy settings were wrong. I corrected the proxy settings in the file below and it worked:
/root/.docker/config.json
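For reference, this is the shape such a file can take (a sketch using Docker's documented "proxies" key; the proxy URL is a placeholder):
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:3128",
      "httpsProxy": "http://proxy.example.com:3128",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}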