While building a docker image I would like to access a service hosted on the parent host. For instance, suppose I need to access an npm private repository that's running on the host machine xpto:8080. On xpto I'm also building a new image whose Dockerfile calls
RUN npm set registry http://xpto:8080
RUN npm install
When I try to docker build -t=my_image . I always get
failed, reason: connect EHOSTUNREACH 192.168.2.103:4873
Also tried RUN wget xpto:8080 and got
failed: No route to host.
Tried to use the --add-host parameter, but it didn't work out.
The strange part is that when I try to access the parent host service from another container it runs OK, but I had to add the --net="host" parameter, like this:
docker run -it --rm --net="host" my-test-image sh
wget xpto:8080
The thing is that this --net parameter isn't supported by docker build!
Thanks,
For some unknown reason, the CentOS firewall was only blocking the connection while building, not while running.
The solution was to add an exception to the firewall, and the problem was solved.
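For reference, on CentOS with firewalld, an exception along these lines should let build-time containers reach the host service (a sketch only; the zone and port values are assumptions based on the setup above):

sudo firewall-cmd --permanent --zone=trusted --add-interface=docker0   # trust traffic from the default docker bridge
sudo firewall-cmd --permanent --add-port=8080/tcp                      # or open just the registry port
sudo firewall-cmd --reload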
My docker build command is failing to create an image using a Dockerfile. It shows this error:
(screenshot of the error not reproduced here)
Check if you can access the site on the host machine.
Check your docker networking; for a docker VM, it is usually a bridge network by default.
Check if you need to add the repository to YUM.
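For example, the first two checks might look like this (the registry URL is illustrative):

curl -I http://your-registry.example.com/   # can the host itself reach the site?
docker network inspect bridge               # view the default bridge network settings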
I created a task definition on Amazon ECS and want to run it with Fargate. I set up my task; the network mode is awsvpc. I created a new container with a docker image (a simple "Hello world" project) on Amazon ECR. I run the task: everything works fine. Now I need to run a docker container from hub.docker.com as part of the task.
Dockerfile
FROM ubuntu
RUN apt-get update && apt-get install -y ...
ADD script.sh /script.sh
RUN chmod +x /script.sh
ENTRYPOINT ["/script.sh"]
script.sh
#!/bin/bash
...prepare data
docker run --rm some_container_from_docker_hub
...continue process data
Initially, I got a "command not found" error. OK, I installed docker into my image. Now I've got "Cannot connect to the Docker daemon".
My question: is there any way to run a docker container inside of another docker container on Amazon Fargate?
You can't run a container from another container using Fargate.
Running a container from another one, as in your case, would require access to the docker daemon, and access to the docker daemon means root access to the host machine. This breaks docker container isolation and is unsafe.
Depending on your usage, I suggest you use an EC2 instance, use CodeBuild, or build an operator that talks to the API to spawn containers.
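If CodeBuild fits your case, a minimal buildspec.yml sketch could look like this (the CodeBuild project must have privileged mode enabled so a Docker daemon is available; the image name is taken from your script):

version: 0.2
phases:
  build:
    commands:
      - docker run --rm some_container_from_docker_hub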
[Edit]: It seems that there is an open issue on this topic [ECS,Fargate]: Support for building Docker containers #95
I have a master container instance (Node.js) that runs some tasks in a temporary worker docker container.
The base image used is node:8-alpine and the entrypoint command executes with user node (non-root user).
I tried running my container with the following command:
docker run \
-v /tmp/box:/tmp/box \
-v /var/run/docker.sock:/var/run/docker.sock \
ifaisalalam/ide-taskmaster
But when the Node.js app tries to run a docker container, a permission denied error is thrown: the app can't read the /var/run/docker.sock file.
Accessing this container through sh and running ls -lha /var/run/docker.sock, I see that the file is owned by root:412. That's why my node user can't run docker containers.
The /var/run/docker.sock file on the host machine is owned by root:docker, so I guess the 412 inside the container is the docker group ID of the host machine.
I'd be glad if someone could provide a workaround to run docker from a docker container on Container-Optimized OS on GCE.
The source Git repository link of the image I'm trying to run is - https://github.com/ifaisalalam/ide-taskmaster
Adding the following command to the start-up script of the host machine solves the problem:
sudo chmod 666 /var/run/docker.sock
I am just not sure if this would be a secure workaround for an app running in production.
EDIT:
This answer suggests another approach that might also work - https://stackoverflow.com/a/47272481/11826776
Also, you may read this article - https://denibertovic.com/posts/handling-permissions-with-docker-volumes/
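Along the lines of the linked answer, a less permissive alternative to chmod 666 is to map the socket's group ID onto a group inside the container and add the node user to it. This is an untested sketch of an entrypoint wrapper, assuming the container starts as root, su-exec has been installed (apk add su-exec), and the group name docker-host is arbitrary:

#!/bin/sh
# Read the GID that owns the mounted socket (412 in the example above)
DOCKER_GID=$(stat -c '%g' /var/run/docker.sock)
# Create a matching group and add the non-root user to it
addgroup -g "$DOCKER_GID" docker-host
addgroup node docker-host
# Drop privileges and start the app
exec su-exec node "$@"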
Context:
OS: Windows 10 Pro; Docker version: 18.09.0 (build 4d60db4); behind a corporate proxy, using CNTLM to work around it (currently, pulling/running images works fine).
Problem:
I was trying to build the following Dockerfile:
FROM alpine:3.5
RUN apk add --update \
python3
RUN pip3 install bottle
EXPOSE 8000
COPY main.py /main.py
CMD python3 /main.py
This is what I got:
Sending build context to Docker daemon 11.26kB
Step 1/6 : FROM alpine:3.5
---> dc496f71dbb5
Step 2/6 : RUN apk add --update python3
---> Running in 7f5099b20192
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/main/x86_64/APKINDEX.tar.gz
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.5/main: could not connect to server (check repositories file)
WARNING: Ignoring APKINDEX.c51f8f92.tar.gz: No such file or directory
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/community/x86_64/APKINDEX.tar.gz
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.5/community: could not connect to server (check repositories file)
WARNING: Ignoring APKINDEX.d09172fd.tar.gz: No such file or directory
ERROR: unsatisfiable constraints:
python3 (missing):
required by: world[python3]
The command '/bin/sh -c apk add --update python3' returned a non-zero code: 1
I was able to access the URL from a browser, so there is no problem with the server itself.
I suspected that it had something to do with the proxy not being propagated to the container, as explained in this question, since I also did not get the http_proxy line when running docker run alpine env. However, after entering the proxies into the config file, it finally appeared. Yet the problem persisted.
I also tried to change the DNS as instructed here, but the problem is still unsolved.
I finally managed to solve this problem, and the culprit was my CNTLM configuration.
For a background story, please check this post.
The root cause of this problem is that the docker container could not access the internet from inside the VM due to a wrong IP setting in CNTLM.ini.
By default, CNTLM listens on 127.0.0.1:3128 to forward the proxy. I followed the default, so the proxy setting on Docker (for the daemon, through the GUI; for the container, through config.json) also pointed to that address and port. It turns out that this "localhost" does not apply to the VM where docker sits, since the VM has its own localhost. Long story short, the solution is to change that address to the DockerNAT IP address (10.0.75.1:3128) in all of the following locations:
CNTLM.ini (on the Listen line; if CNTLM is used for other purposes as well, more than one Listen line can be supplied)
Docker daemon's proxy (through the Docker setting GUI)
Docker container config.json (usually in C:\Users\<username>\.docker), by adding the following lines:
"proxies":
{
    "default":
    {
        "httpProxy": "http://10.0.75.1:3128",
        "httpsProxy": "http://10.0.75.1:3128",
        "noProxy": "<your no_proxy>"
    }
}
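After restarting Docker, the variables should show up inside new containers; the same check from the question can confirm it:

docker run --rm alpine env | grep -i proxy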
Also check these related posts:
Building a docker image for a node.js app fails behind proxy
Docker client ignores HTTP_PROXY envar and build args
Beginner having trouble with docker behind company proxy
You can try to build your Dockerfile with the following command:
docker build --build-arg http_proxy=http://your.proxy:8080 --build-arg https_proxy=http://your.proxy:8080 -t yourimage .
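Note that http_proxy, https_proxy, and no_proxy are predefined build arguments, so they work without ARG declarations in the Dockerfile; no_proxy can be passed the same way for hosts that should bypass the proxy:

docker build --build-arg no_proxy=localhost,127.0.0.1 -t yourimage .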
All,
I am using DCOS and the associated Jenkins.
My company is having a proxy for any external traffic.
Jenkins is running properly and can access the internal network as well as any external network.
I can get jobs to curl a URL on the internet if I set the HTTP proxy. I can pass this proxy to the mesosphere/jenkins-dind:0.3.1 container as an environment variable; however, I can't run any docker pull or docker run while in docker-in-docker mode.
I managed to reproduce the issue on one of the agent box.
sudo docker run hello-world
Hello from Docker!
This works!!
However, sudo docker run --privileged mesosphere/jenkins-dind:0.3.1 wrapper.sh "docker run hello-world" will fail with
docker: Error while pulling image: Get https://index.docker.io/v1/repositories/library/hello-world/images: x509: certificate is valid for FG3K6C3A13800607, not index.docker.io.
This typically shows that the docker daemon does not have access to the proxy.
Do you know how to ensure that the dind is getting access to the proxy settings?
Antoine
This error can also manifest itself if the Docker daemon is unauthenticated against your registry, but it looks like you're running against the public image, so that's not likely to be the problem.
You could try creating a new Parameter to the Jenkins node (see the instructions here for an example for how to set an environment variable called DOCKER_EXTRA_OPTS: https://docs.mesosphere.com/1.8/usage/service-guides/jenkins/advanced-configuration/).
In this case, we want to do the same (with Name env) but with the contents of Value set to something like HTTP_PROXY=http://proxy.example.com:80/.
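To verify outside Jenkins, the failing command from above could be re-run on the agent box with the proxy passed explicitly (the proxy address is illustrative, and whether wrapper.sh forwards these variables to the inner daemon depends on the image):

sudo docker run --privileged \
  -e HTTP_PROXY=http://proxy.example.com:80/ \
  -e HTTPS_PROXY=http://proxy.example.com:80/ \
  mesosphere/jenkins-dind:0.3.1 wrapper.sh "docker run hello-world"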