Docker: access to VPN domain from a Docker container

There is a web source "http://vpnaccessible.com" from which I need to download an RPM package via wget, and it is accessible only over VPN. So I'm using the Cisco AnyConnect VPN client to connect to the VPN, and then I want to build an image from a Dockerfile that contains this wget command.
The problem: Docker can't reach that domain from within the container. I tried passing dns options in /etc/docker/daemon.json, but I'm not sure which DNS IP I should pass, because locally my default DNS servers are 192.168.0.1 and 8.8.8.8. I tried passing the IP address of the docker0 interface (172.17.0.1) in that array -- didn't work.
$ cat /etc/docker/daemon.json
{
  "insecure-registries": ["my-insecure-registry.com"],
  "dns": ["192.168.0.1", "172.17.0.1", "8.8.8.8"]
}
I also tried to add an entry for this web source to /etc/resolv.conf, but when I run docker to build the image, the file is reverted to its previous state (the changes are not persisted there), which I guess is the Cisco VPN client's behavior -- didn't work.
I also tried adding the IP address of the interface created by the Cisco VPN client to that dns array -- didn't work.
I also commented out dns=dnsmasq in /etc/NetworkManager/NetworkManager.conf -- didn't work.
To be sure, I restarted the docker and NetworkManager services after each of these changes.
Question: Should I create some bridge between the Docker container and my VPN? How can I solve this issue?

You can try using your host's network instead of the default bridge. Just add the following argument:
--network host
or
--net host
depending on your Docker version.
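Since the wget in the question runs during the image build, note that recent Docker versions also accept this flag on docker build, so the RUN steps share the host's network stack (and therefore the AnyConnect tunnel). A sketch, assuming the Dockerfile is in the current directory and myimage is a placeholder name:

```shell
# Build using the host's network, so RUN wget ... can reach VPN-only hosts
docker build --network host -t myimage .

# For a one-off download at run time instead:
docker run --rm --network host myimage wget http://vpnaccessible.com/some.rpm
```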


How to alias a DNS name to host.docker.internal inside a docker container?

TL;DR: how do I get a client in my container to make an HTTPS connection to a service on the host?
I've got a service running on a VM on my local dev machine (macOS) that's serving HTTPS on port 8443; it's got a certificate for dev.mycoolproject.com and dev.mycoolproject.com has an A record pointing to 127.0.0.1. So, if I run my client on my local machine and point it to https://dev.mycoolproject.com:8443 it makes a secure connection to my local service.
I want to run my client inside a docker container and still have it connect to that local server on the host. But obviously dev.mycoolproject.com pointing at 127.0.0.1 won't work, and I can't just use /etc/hosts to redirect it because the host's IP is dynamic. I can reach the local server at host.docker.internal:8443, but I'll get TLS errors because the hostname doesn't match.
Is there any way I can get docker's DNS to map dev.mycoolproject.com to the host IP? I looked into running dnsmasq locally in the container but I had trouble getting it to work.
In a container where you might not have access to tools like dig or nslookup, and don't want to install another 55 MB package (like Debian's dnsutils) just to get the host.docker.internal IP, it may be better to use getent instead of dig:
getent hosts host.docker.internal | awk '{ print $1 }'
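As a concrete illustration (the address below is made up; on Docker Desktop host.docker.internal typically resolves to an internal VM address), the awk '{ print $1 }' part simply picks the first whitespace-separated field of the getent output:

```shell
# Simulate the output of: getent hosts host.docker.internal
line="192.168.65.2      host.docker.internal"
# awk splits on any run of whitespace; $1 is the address column
ip=$(printf '%s\n' "$line" | awk '{ print $1 }')
echo "$ip"   # prints 192.168.65.2
```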
I ran into a similar issue yesterday and came up with a workaround that adds an entry to /etc/hosts resolving to the host IP.
You'll need dig or another DNS tool to query for the IP.
If you are running as root you can use:
echo "$(dig +short host.docker.internal) dev.mycoolproject.com" >> /etc/hosts
If you have sudo you can run:
echo "$(dig +short host.docker.internal) dev.mycoolproject.com" | sudo tee -a /etc/hosts
Initially I was hoping the --add-host run option would allow special Docker entries (like host.docker.internal) in the host-IP argument, but unfortunately it doesn't.
I wanted to avoid more container configuration so I went with this. Setting up dnsmasq would be a more stable solution.
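For what it's worth, Docker 20.10 added the special value host-gateway for --add-host, which makes this approach usable after all and avoids the dig workaround entirely. A sketch, where my-client-image is a placeholder name:

```shell
# Map the dev hostname to the host's gateway IP at container start
# (requires Docker 20.10 or newer)
docker run --add-host dev.mycoolproject.com:host-gateway my-client-image
```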

On Docker for Windows, how do you push to a registry being port forwarded to localhost on the host machine?

I'm just going to put this here, because it was very difficult to find information on this topic and I ended up solving it myself.
Setup
Bastion host in aws with a public ip address
Registry (image registry:2) on a private subnet behind bastion host
Successful ssh port forwarding through bastion, connecting localhost:5000 to registry:5000
curl localhost:5000/v2/_catalog returns the list of repositories in the registry.
So far so good.
docker tag {my image} localhost:5000/{my image}
docker push localhost:5000/{my image}
Result
The push refers to repository [localhost:5000/{my image}]
Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: connect: connection refused
How do we connect to a registry port forwarded to localhost?
I have found some obscure posts suggesting that you need to make a custom privileged container and do your ssh bastion port forwarding inside the container. This is essentially working around a problem introduced by the fact that the docker daemon is actually running inside a virtual machine!
https://docs.docker.com/docker-for-windows/networking/
You can find a hint here:
I WANT TO CONNECT FROM A CONTAINER TO A SERVICE ON THE HOST
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purposes and will not work in a production environment outside of Docker Desktop for Windows.
So given the above, I reasoned that even though this advice is for containers, the docker daemon itself is probably acting on docker cli commands from within a similar context.
Therefore, first you need to add host.docker.internal:5000 as an insecure registry in your Docker daemon setup. On Docker for Windows, this can be found under Settings > Daemon > Insecure registries. Unfortunately host.docker.internal doesn't count as localhost (Docker allows insecure registries on localhost by default), so this step is required. Then simply:
docker tag {my image} host.docker.internal:5000/{my image}
docker push host.docker.internal:5000/{my image}
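On installs without the Settings UI, the same setting can be added to the daemon's daemon.json (a sketch; the port must match your forwarded one, and the daemon needs a restart afterwards):

```json
{
  "insecure-registries": ["host.docker.internal:5000"]
}
```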
Success!
Hopefully this helps some other very confused developers.

How to set docker to always use specific host IP or interface

I have a Linux machine with Docker installed that also works as a NAT router. It has multiple interfaces, and I need Docker to communicate with only one of them by default. After hours of trying custom networks, the best solution I found is to set the interface IP when specifying port mappings:
docker run -p 192.168.0.1:80:80 -d nginx
where 192.168.0.1 is my interface IP. Is it possible to make Docker use that IP (interface) every time? E.g. when I download someone's docker-compose.yml and use it without changes.
You can add the "ip" option to /etc/docker/daemon.json:
{
  [...]
  "ip": "192.168.0.1"
}
After restarting the service, ports will be exposed on this interface instead of the default 0.0.0.0.
AFAIK, the daemon.json file accepts any of the options defined for dockerd itself: https://docs.docker.com/engine/reference/commandline/dockerd/
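With that daemon-level default in place, published ports bind to 192.168.0.1 even for compose files that don't specify an address. A compose file can also pin the interface explicitly per mapping; a sketch, reusing the nginx example from the question:

```yaml
# docker-compose.yml -- bind the published port to one interface explicitly
services:
  web:
    image: nginx
    ports:
      - "192.168.0.1:80:80"   # HOST_IP:HOST_PORT:CONTAINER_PORT
```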

How to provide internet access to running docker container?

Hi, can anyone tell me the correct command to provide internet access to a running container?
I know we have to specify --net in the docker run command to access the internet from inside the container.
What if I want to provide internet access to a container that I didn't run with --net (i.e. a container that does not have internet access)?
I found the docker network connect NetworkName ContainerName/ID command at https://docs.docker.com/engine/reference/commandline/network_connect/ but running it does not provide internet access, so please share the correct command.
Note: I'm trying this on a CentOS container.
Your Docker containers should have internet access by default, as that is Docker's normal setup; by no means should --net be required for that. If they don't, you probably have something misconfigured on your host, e.g. additional firewall rules or IP forwarding disabled.
For starters, check whether IP forwarding is enabled; it should look like the following:
$ cat /proc/sys/net/ipv4/ip_forward
1
and verify that you don't have anything unusual in your iptables rules.
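If that file prints 0 instead, forwarding can be switched on with the standard sysctl knob; a sketch (needs root):

```shell
# Enable IPv4 forwarding immediately (does not survive a reboot)
sudo sysctl -w net.ipv4.ip_forward=1

# To persist across reboots, add this line to /etc/sysctl.conf
# or a file under /etc/sysctl.d/:
#   net.ipv4.ip_forward = 1
# then reload with: sudo sysctl -p
```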
Docker containers should be able to reach the internet once DNS is configured properly. Check the container's network status step by step:
Enter a public DNS server (8.8.8.8) manually in /etc/resolv.conf inside the container.
If that doesn't work, check the daemon side of the network: edit /etc/default/docker, add public DNS values there (DOCKER_OPTS="--dns 208.67.222.222 --dns 208.67.220.220"), then run sudo service docker restart.
Log in to the container and ping google.com to verify.
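Note that on systemd-based distros /etc/default/docker is typically ignored; the equivalent daemon-wide setting lives in /etc/docker/daemon.json (restart the docker service after editing):

```json
{
  "dns": ["208.67.222.222", "208.67.220.220"]
}
```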

Access docker remote API from container

I'm trying to access Docker remote API from within a container because I need to start other containers.
The host address is 172.19.0.1, so I'm using http://172.19.0.1:2375/images/json to get the list of images (from the host, http://localhost:2375/images/json works as expected).
The connection is refused, I guess because Docker (for Windows) listens on 127.0.0.1 and not on 0.0.0.0.
I've tried to change configuration (both from UI and daemon.json) adding the entry:
"hosts": ["tcp://0.0.0.0:2375"]
but the daemon fails to start. How can I access the API?
You can set DOCKER_OPTS as shown below and try again. On Windows, Docker runs inside a VM, so you have to ssh into the VM and make the changes there.
DOCKER_OPTS='-H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock'
Check if it works for you.
Update: To ssh into the VM (assuming default is the name of the VM you created with Docker Toolbox), enter the following command in the Docker Quickstart Terminal:
docker-machine ssh default
You can find more details here.
You could bind-mount the host's /var/run/docker.sock into the container where you need it. This way, you don't expose the Docker Remote API via an open port.
Be aware that it does provide root-like access to Docker.
-v /var/run/docker.sock:/var/run/docker.sock
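With the socket mounted, API calls go through it directly instead of a TCP port; for example, the image list from the original question (curl supports --unix-socket since version 7.40):

```shell
# Same query as http://localhost:2375/images/json, but over the mounted socket
curl --unix-socket /var/run/docker.sock http://localhost/images/json
# Starting other containers is then a POST to /containers/create and
# /containers/{id}/start on the same socket.
```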
You should use "tcp://host.docker.internal:2375" to connect to the host machine from a container. Please make sure that you can ping the host.docker.internal address first.
https://github.com/docker/for-win/issues/1976
