Docker build error: "could not connect to server" (behind proxy) - docker

Context:
OS: Windows 10 Pro; Docker version 18.09.0 (build 4d60db4); behind a corporate proxy, using CNTLM to handle proxy authentication (pulling and running images currently works fine)
Problem:
I was trying to build the following Dockerfile:
FROM alpine:3.5
RUN apk add --update \
python3
RUN pip3 install bottle
EXPOSE 8000
COPY main.py /main.py
CMD python3 /main.py
This is what I got:
Sending build context to Docker daemon 11.26kB
Step 1/6 : FROM alpine:3.5
---> dc496f71dbb5
Step 2/6 : RUN apk add --update python3
---> Running in 7f5099b20192
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/main/x86_64/APKINDEX.tar.gz
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.5/main: could not connect to server (check repositories file)
WARNING: Ignoring APKINDEX.c51f8f92.tar.gz: No such file or directory
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/community/x86_64/APKINDEX.tar.gz
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.5/community: could not connect to server (check repositories file)
WARNING: Ignoring APKINDEX.d09172fd.tar.gz: No such file or directory
ERROR: unsatisfiable constraints:
python3 (missing):
required by: world[python3]
The command '/bin/sh -c apk add --update python3' returned a non-zero code: 1
I was able to access the URL from a browser, so there is no problem with the server itself.
I suspected that the proxy was not being propagated to the container, as explained in this question, since the http_proxy line was also missing from the output of docker run alpine env. After entering the proxies into the config file, the variable finally appeared, yet the problem persisted.
I also tried changing the DNS as instructed here, but the problem remained unsolved.

I finally managed to solve this problem, and the culprit was my CNTLM configuration.
For a background story, please check this post.
The root cause was that the Docker container could not access the internet from inside the VM because of a wrong IP setting in CNTLM.ini.
By default, CNTLM listens on 127.0.0.1:3128 to forward the proxy. I kept that default, and therefore also pointed the proxy settings in Docker (for the daemon, through the GUI; for containers, through config.json) at that address and port. It turns out that this "localhost" does not apply to the VM where Docker runs, since the VM has its own localhost. Long story short, the solution is to change that address to the DockerNAT IP address (10.0.75.1:3128) in all of the following locations:
CNTLM.ini (on the Listen line; if you use CNTLM for other purposes as well, you can supply more than one Listen line)
Docker daemon's proxy (through the Docker setting GUI)
Docker container config.json (usually in C:\Users\<username>\.docker), by adding the following lines:
"proxies":
{
"default":
{
"httpProxy": "http://10.0.75.1:3128",
"httpsProxy": "http://10.0.75.1:3128",
"noProxy": <your no_proxy>
}
}
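To sanity-check an edit like this before restarting Docker, the sketch below writes a complete config.json containing that proxies block and validates it as JSON. The /tmp/demo-docker path and the noProxy value are placeholders for this demo; on Windows the real file lives at C:\Users\<username>\.docker\config.json.

```shell
# Sketch: write a full config.json with the proxies block above and validate
# it. The path and noProxy value are placeholders, not the real locations.
mkdir -p /tmp/demo-docker
cat > /tmp/demo-docker/config.json <<'EOF'
{
  "proxies": {
    "default": {
      "httpProxy": "http://10.0.75.1:3128",
      "httpsProxy": "http://10.0.75.1:3128",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}
EOF
# A malformed config.json makes the docker CLI fail with a confusing error,
# so check the syntax before restarting anything:
python3 -m json.tool /tmp/demo-docker/config.json > /dev/null && echo "config.json OK"
```

Once the real config.json carries this block, docker run --rm alpine env should show the HTTP_PROXY/HTTPS_PROXY variables inside a container.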
Also check these related posts:
Building a docker image for a node.js app fails behind proxy
Docker client ignores HTTP_PROXY envar and build args
Beginner having trouble with docker behind company proxy

You can try to build your Dockerfile with the following command:
docker build --build-arg http_proxy=http://your.proxy:8080 --build-arg https_proxy=http://your.proxy:8080 -t yourimage .
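Case matters here: some tools inside the build read http_proxy while others read HTTP_PROXY, so it is common to pass both spellings. A hedged sketch that assembles the full command (your.proxy:8080 and yourimage are placeholders; the command is echoed so it can be inspected before being run for real):

```shell
# Sketch: assemble proxy build args in both lower- and upper-case spellings.
# The proxy URL and image name are placeholders. The command is echoed, not
# executed, so this is safe to run without Docker installed.
PROXY="http://your.proxy:8080"
build_cmd="docker build --build-arg http_proxy=$PROXY --build-arg https_proxy=$PROXY --build-arg HTTP_PROXY=$PROXY --build-arg HTTPS_PROXY=$PROXY -t yourimage ."
echo "$build_cmd"
```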

Related

Docker Build CMD fail yum not able to install the requirements

My docker build command is failing to create an image using a Dockerfile. It shows this error; here is the screenshot of the error.
Check if you can access the site on the host machine.
Check your Docker networking; for a Docker VM, it is usually a bridge network by default.
Check if you need to add the repository to YUM.

How to connect to a docker container from host(windows machine)

I am new to Docker. I have a Java application which I can run with the javaws command, like this:
javaws http://localhost:9088/rtccClient/rtcc.jnlp
I created a Docker container for this application on my Windows machine using "ibmcom/websphere-liberty:latest" as the base image. After starting the container, I run the same command and it says "CouldNotLoadArgumentException[ Could not load file/URL specified: http://localhost:9088/rtccClient/rtcc.jnlp]".
Below is my Dockerfile. Please point out what I am doing wrong.
FROM ibmcom/websphere-liberty:latest
USER root
ADD ./rtcc.ear /opt/ibm/wlp/usr/servers/defaultServer/apps
ADD ./rtccClient.war /opt/ibm/wlp/usr/servers/defaultServer/apps
RUN yum -y install unixODBC
RUN yum -y install libaio
RUN mkdir -pv /basic
COPY ./basicinstaclient/oracle-instantclient19.8-basic-19.8.0.0.0-1.x86_64.rpm /basic/
RUN rpm -i /basic/oracle-instantclient19.8-basic-19.8.0.0.0-1.x86_64.rpm
EXPOSE 9088
EXPOSE 9450
When I inspect the Docker container, the IP shows as "172.18.0.3" and the container's port is 9080. In the jnlp file which I mention in the javaws command, am I supposed to use the container's IP and port?
So I used "javaws http://172.18.0.3:9080/rtccClient/rtcc.jnlp", but it still didn't work. I even replaced it with my Windows machine's IP, and also logged into the container to execute javaws there; it says javaws not found. Please help.
Try a command like this:
docker run -p 9080:9080 YOUR_IMAGE_NAME_HERE
then try javaws http://localhost:9088/rtccClient/rtcc.jnlp again.
The -p flag maps ports as host:container: from left to right, it maps the host machine port to the container's internal port.
Here you can also find a nice docker FROM scratch workshop (shameless plug) : https://docker-from-scratch.ivonet.nl/
I tried as you said and this is what I am getting: CouldNotLoadArgumentException[ Could not load file/URL specified: http://localhost:9088/rtccClient/rtcc.jnlp]
Moreover, you are mapping 9080:9080 in the run command but using 9088 in the javaws command, so how would that work?
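As that comment points out, the published host port has to match the port in the JNLP URL. A hedged sketch of a mapping that keeps the original URL working (the image and container names are hypothetical, and the command is echoed here rather than executed):

```shell
# -p HOST:CONTAINER — publish the app's container port 9080 on host port
# 9088, so the original URL http://localhost:9088/... keeps working.
# my-liberty-image is a placeholder for your built image name.
run_cmd="docker run -d -p 9088:9080 --name rtcc my-liberty-image"
echo "$run_cmd"
```

With that mapping, javaws http://localhost:9088/rtccClient/rtcc.jnlp run on the Windows host (not inside the container, which has no javaws) should reach the app listening on 9080 inside the container.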

How to access web service running in another docker container from within dockerfile?

I have a web service in a Docker container, running and accessible at localhost:5000 (host machine). I would like to call this service from a Dockerfile. I am using a multi-stage Dockerfile for a .NET Core app, and would like to collect build stats and send them to the service running at localhost:5000. So far the suggested approach has been to use --network="host", but this doesn't work at build time.
How strange! --network=host should achieve the desired effect, in line with the suggested approach that you mention. I wonder why this doesn't work for you out of the box.
I suggest you test this simplified version of the problem to eliminate potential sources of error outside the Docker networking stack:
Run a dummy web service: docker run -d -p 5000:80 nginx will do the trick
Create this Dockerfile:
FROM busybox
RUN echo localhost:5000
RUN wget localhost:5000
Run docker build --no-cache .. As expected, the network call will fail and the output should include:
wget: can't connect to remote host (127.0.0.1): Connection refused
Run docker build --network=host --no-cache .. For me, the output includes:
Connecting to localhost:5000 (127.0.0.1:5000)
index.html 100% |********************************| 612 0:00:00 ETA
Run docker build --network=bridge --no-cache .. Again, the network call fails with:
wget: can't connect to remote host (127.0.0.1): Connection refused
What do you get when you try this?
Important note in the example above: I include the --no-cache parameter in the docker build commands to ensure the entire Dockerfile is rebuilt. Otherwise docker will just pull the built image layers from cache and ignore any changes you made to the --network parameter since you last built this Dockerfile.
More importantly, though the network option is exposed at image build time (as of a recent API version), I'm not sure this use would be consistent with best practices. You may have better luck with a solution that transmits build stats outside the actual Dockerfile. For example, a CI script that runs docker build ... could collect the output from that command, grep/sed it for specific stats, and transmit those to your web service.
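A minimal sketch of that CI-side idea, assuming the stats are scraped from the docker build log. The sample log line and the stats endpoint are hypothetical; a real script would capture the output of docker build . 2>&1 instead of the canned text below.

```shell
# Sketch: parse a build log for the image id and report it out-of-band,
# instead of calling the web service from inside the Dockerfile.
# The log content is canned here purely for illustration.
log="Step 1/3 : FROM busybox
Successfully built 8a287d116b4b"
image_id=$(printf '%s\n' "$log" | sed -n 's/^Successfully built //p')
# In a real pipeline, the next step might be something like:
#   curl -s -X POST "http://localhost:5000/stats" -d "id=$image_id"
echo "image_id=$image_id"
```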
Edit: how to achieve this with docker-compose, per your comment. Sadly docker-compose docs don't advertise this, but --network is actually supported. It was added in Compose YAML file format version 3.4, specifically in this PR.
Here's an example docker-compose.yml:
version: '3.4'
services:
  web:
    build:
      context: .
      network: host
Put that in the same directory as the example Dockerfile from above, and make sure nginx is running in a separate container as demonstrated above.
Here's the result:
> docker-compose build --no-cache
Building web
Step 1/3 : FROM busybox
---> e1ddd7948a1c
Step 2/3 : RUN echo localhost:5000
---> Running in d5f0d712c188
localhost:5000
Removing intermediate container d5f0d712c188
---> 8aa9d974447f
Step 3/3 : RUN wget localhost:5000
---> Running in a10cee732e48
Connecting to localhost:5000 (127.0.0.1:5000)
index.html 100% |********************************| 612 0:00:00 ETA
Removing intermediate container a10cee732e48
---> 8a287d116b4b
Successfully built 8a287d116b4b
Successfully tagged dockerquestion_web:latest

How to access parent host service while building docker image?

While building a Docker image, I would like to access a service hosted on the parent host. For instance, suppose I need to access a private npm registry running on the host machine at xpto:8080. On xpto I am also building a new image whose Dockerfile calls:
RUN npm set registry http://xpto:8080
RUN npm install
When I run docker build -t=my_image ., I always get:
failed, reason: connect EHOSTUNREACH 192.168.2.103:4873
I also tried RUN wget xpto:8080 and got:
failed: No route to host.
I tried the --add-host parameter as well, but it didn't work out.
The strange part is that when I access the parent-host service from another container, it works fine, though I had to add the --net="host" parameter, like this:
docker run -it --rm --net="host" my-test-image sh
wget xpto:8080
The thing is, this --net parameter isn't supported by docker build!
Thanks,
For some unknown reason, the CentOS firewall was only blocking the connection at build time, not at run time.
The solution was to add an exception to the firewall, and the problem was solved.
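If the host runs firewalld (the usual CentOS 7 firewall), one way to add such an exception is to trust the Docker bridge interface. A sketch, assuming the default bridge name docker0; verify the interface and zone on your own host before applying, which is why the actual firewall-cmd calls below are left commented out:

```shell
# Sketch, assuming firewalld and the default docker0 bridge interface.
# The real exception (run as root, after verifying your setup):
#   firewall-cmd --permanent --zone=trusted --add-interface=docker0
#   firewall-cmd --reload
# This block only reports whether firewalld is present at all:
state=$(firewall-cmd --state 2>/dev/null || echo "firewalld not available")
echo "$state"
```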

Add xserver into Docker container (the host is headless)

I'm building a Docker container which has Maven and some dependencies, and then executes a script inside the container. It seems one of those dependencies needs an X server to work; nothing is shown on screen, but it appears to be necessary and can't be avoided.
I got it working by putting ENV DISPLAY=x.x.x.x:0 in the Dockerfile, so it connects to the external X server and works. But the point is to make the Docker container self-sufficient.
So I need to add an X server to my container by adding the necessary pieces to the Dockerfile, and I want that X server to be accessible only by the Docker container itself, not externally.
The FROM of my Dockerfile is FROM ubuntu:15.04, and that is unchangeable because my Dockerfile has a lot of things that depend on that specific version.
I've read some posts about how to connect from a Docker container to the X server of the Docker host machine, like this one. But as the question's title says, the Docker host is headless and doesn't have an X server.
Which would be the minimum set of apt-get packages to install into the container to get an X server?
I guess my Dockerfile will also need the display environment variable, something like ENV DISPLAY=:0. Is this correct?
Is anything else needed to be added in docker run command?
Thank you.
You can install and run x11vnc inside your Docker container. I'll show you how to run it on a headless host and connect to it remotely to run X applications (e.g. xterm).
Dockerfile:
FROM joprovost/docker-x11vnc
RUN mkdir ~/.vnc && touch ~/.vnc/passwd
RUN x11vnc -storepasswd "vncdocker" ~/.vnc/passwd
EXPOSE 5900
CMD ["/usr/bin/x11vnc", "-forever", "-usepw", "-create"]
And build a docker image named vnc:
docker build -t vnc .
Run a container, and remember to map port 5900 to the host for remote connections (I'm using --net=host here):
docker run -d --name=vnc --net=host vnc
Now you have a running container with x11vnc inside. Download a VNC client such as RealVNC and connect to <server_ip>:5900; the password is vncdocker, as set in the Dockerfile. You'll get a remote X screen with an xterm open. If you execute env there, you will find the environment variable DISPLAY=:20.
Now let's go into the Docker container and try to open another xterm:
docker exec -it vnc bash
Then execute the following command inside container:
DISPLAY=:20 xterm
A new xterm window will pop up in your VNC client window. I guess that's how you are going to run your application.
Note:
The base VNC image is based on Ubuntu 14, but I guess the packages are similar on Ubuntu 16
Don't expose port 5900 if you don't want remote connections
Hope this can help :-)
