To build a certain image I need to create a tunnel and make docker use this tunnel as a socks5 proxy (to use the proxy for DNS too).
So now I've got several problems:
How to make docker use the proxy that is on the host?
How to make docker use the proxy to get the base image?
How to make docker use the proxy for the RUN instruction?
How to make docker use the proxy for the ADD instruction?
Since I spent all day researching this, here are the answers.
I will leave the partially incomplete/wrong/old answer below, since I set up a new system today and needed to figure out all of the questions again because some parts of the old answer didn't make sense anymore.
Using localhost:port does not work. Until this issue is resolved, you need to use the IP address of your docker0 network interface (172.17.0.1 in my case). If your host OS is Linux, you can use localhost:port by passing the additional --network=host parameter to docker build, as mentioned in another answer below.
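If you are not sure which address to use, a quick way to check the docker0 bridge address on the host (assuming the default bridge setup):
ip -4 addr show docker0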
Questions 2 and 3: just put this content (change the IP and port if needed) into ~/.docker/config.json (note that the protocol is socks5h):
{
  "proxies": {
    "default": {
      "httpProxy": "socks5h://172.17.0.1:3128",
      "httpsProxy": "socks5h://172.17.0.1:3128",
      "noProxy": ""
    }
  }
}
(With --network=host you can use "socks5h://localhost:3128" instead.)
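A quick way to check that the client picks this up: start a throwaway container and look at its environment, since Docker injects the configured proxy variables automatically (busybox here is just an example image):
docker run --rm busybox env | grep -i proxy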
Question 4: It seems that the ADD instruction is executed with the (proxy) environment variables of the host, ignoring those in config.json. To make things more complicated, since the daemon usually runs as root, only the root user's environment variables are picked up. Even more complicated: the host of course needs to use localhost as the proxy host. And the cherry on top: in this case the protocol needs to be socks5 (without the h at the end), for whatever reason.
In my case, since I switched to WSL2 and run Docker inside WSL2 (starting the dockerd daemon manually), I just export the needed environment variable before calling dockerd:
#!/bin/bash
# Start the Docker daemon automatically when logging in, if it is not already running.
RUNNING=$(ps aux | grep dockerd | grep -v grep)
if [ -z "$RUNNING" ]; then
    # Clear any inherited proxy settings, then set the one the daemon should use.
    unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY no_proxy NO_PROXY
    export http_proxy=socks5h://localhost:30000
    # sudo -E preserves the environment so dockerd (running as root) sees http_proxy.
    sudo -E dockerd > /dev/null 2>&1 &
    disown
fi
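To double-check that dockerd actually picked up the proxy variable (an optional sanity check, assuming a single dockerd process):
sudo cat /proc/$(pgrep -x dockerd)/environ | tr '\0' '\n' | grep -i proxy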
If you have the "regular" setup on a Linux machine, you could use the old answer to 4. below, but beware that there you probably also need to use localhost.
Incomplete/wrong/old answer starting here
Using localhost:port does not work. Until this issue is resolved, you need to use the IP address of your docker0 network interface (172.17.0.1 in my case).
This answer applies to question 3 too. Just put this content (change the IP and port if needed) into ~/.docker/config.json (note that the protocol is socks5h):
{
  "proxies": {
    "default": {
      "httpProxy": "socks5h://172.17.0.1:3128",
      "httpsProxy": "socks5h://172.17.0.1:3128",
      "noProxy": ""
    }
  }
}
I do not know why (edit: now I know, it's because dockerd runs as root and does not pick up proxy environment variables from the regular user), but for the ADD instruction the former settings do not apply (names do not get resolved through the proxy). We need to put this content into /etc/systemd/system/docker.service.d/http-proxy.conf:
[Service]
Environment="HTTP_PROXY=socks5://172.17.0.1:3128/"
then
sudo systemctl daemon-reload
sudo systemctl restart docker
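You can verify that the daemon picked up the variable (an optional check on a systemd-based setup):
sudo systemctl show --property=Environment docker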
(This is just wrong/unneeded with answer 2.) Also, for package managers like yum to be able to update packages during the build, you need to pass the environment variable like this:
docker build --build-arg http_proxy=socks5://172.17.0.1:3128 .
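This works because http_proxy is one of Docker's predefined build arguments, so RUN instructions see it without an explicit ARG declaration. A minimal sketch (the base image and packages are just examples):
FROM centos:7
# yum reaches the repositories through the proxy passed via --build-arg http_proxy=...
RUN yum -y update && yum -y install curl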
Using localhost:port works if you add the --network=host option to the docker build command.
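For example (image name, proxy port, and build context are placeholders):
docker build --network=host --build-arg http_proxy=socks5h://localhost:3128 -t myimage .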
In order to connect a container to a SOCKS5 proxy running locally (that is, so the container's entire Internet traffic goes through the proxy), the container must be able to reach the host machine.
To reach the host from within Linux you must:
put --network="host" in the run command:
docker run --name test --network="host" --env http_proxy="socks5://127.0.0.1:1080" --env https_proxy="socks5://127.0.0.1:1080" nginx sh -c "curl ifconfig.io"
For Mac and Windows users, use host.docker.internal:local_port instead:
docker run --name test --env http_proxy="socks5://host.docker.internal:1080" --env https_proxy="socks5://host.docker.internal:1080" nginx sh -c "curl ifconfig.io"
Another option is to transparently redirect the containers' TCP traffic with iptables to a local redirector such as redsocks. The rules below exclude private and reserved ranges and send everything else to a redirector that is assumed to be listening on port 5000:
sudo iptables -t nat -N REDSOCKS
sudo iptables -t nat -A REDSOCKS -d 0.0.0.0/8 -j RETURN
sudo iptables -t nat -A REDSOCKS -d 10.0.0.0/8 -j RETURN
sudo iptables -t nat -A REDSOCKS -d 127.0.0.0/8 -j RETURN
sudo iptables -t nat -A REDSOCKS -d 169.254.0.0/16 -j RETURN
sudo iptables -t nat -A REDSOCKS -d 172.16.0.0/12 -j RETURN
sudo iptables -t nat -A REDSOCKS -d 224.0.0.0/4 -j RETURN
sudo iptables -t nat -A REDSOCKS -d 240.0.0.0/4 -j RETURN
sudo iptables -t nat -A REDSOCKS -d 192.168.0.0/16 -j RETURN
sudo iptables -t nat -A REDSOCKS -d 172.17.0.0/12 -j RETURN
sudo iptables -t nat -A REDSOCKS -p tcp -j REDIRECT --to-ports 5000
sudo iptables -t nat -A OUTPUT -p tcp -o docker0 -j REDSOCKS
sudo iptables -t nat -A PREROUTING -p tcp -i docker0 -j REDSOCKS
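For completeness, a minimal redsocks.conf sketch that would match these rules; the listening port 5000 and the upstream SOCKS server at 127.0.0.1:1080 are assumptions, so adjust them to your setup:
base {
    log_info = on;
    daemon = on;
    redirector = iptables;
}
redsocks {
    // must be reachable from the docker0 bridge for the REDIRECT above to work
    local_ip = 0.0.0.0;
    local_port = 5000;
    // the actual SOCKS5 proxy to forward to
    ip = 127.0.0.1;
    port = 1080;
    type = socks5;
}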
I have a fairly plain Debian Buster install. Debian Buster uses nftables rather than iptables. If I try and run a container with a published port:
sudo docker run -it --rm --name=port-test -p 1234:1234/tcp debian:jessie-slim
then I get this error:
docker: Error response from daemon: driver failed programming external connectivity on endpoint port-test (941052b9f420df39ac3d191dcbe12e97276703903911e7b5172663e7736d59e0): (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 1234 -j DNAT --to-destination 172.17.0.2:1234 ! -i docker0: iptables v1.8.2 (nf_tables): Chain 'DOCKER' does not exist
How do I get port publishing working?
Please see this issue and comment:
https://github.com/moby/moby/issues/26824#issuecomment-517046804
You can run sudo update-alternatives --config iptables and sudo update-alternatives --config ip6tables (if you use IPv6) and set them to iptables-legacy, which is a compatibility mode that Docker can work with.
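Non-interactively, the switch looks roughly like this (the alternative paths are the usual ones on Debian Buster, so verify them on your system), followed by a Docker restart:
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo systemctl restart docker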
I have a CentOS server with two static IP addresses (192.168.3.100 and 192.168.3.101) on the same NIC, and two containers running on it with the port mappings below. The containers use Docker's default 'bridge' network.
192.168.3.100:80->80/tcp container1
192.168.3.101:80->80/tcp container2
From the host, I can execute curl 192.168.3.100 or curl 192.168.3.101 and it works fine. From the host/containers I can execute curl 172.17.0.2 or curl 172.17.0.3 and it works fine.
But I cannot execute curl 192.168.3.100 or curl 192.168.3.101 from either of these containers; it ends up with the error No route to host. I can ping them, though.
What am I missing here? I want to avoid using a 192.x Docker network, as I do not want to tie up that address space on one machine. Using Docker 1.12.6.
Output of iptables -S | grep -i reject (the REJECT rules):
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
Thanks for your input
If you are able to ping but not able to curl, and you get "No route to host", it usually means that your packets are being rejected by the firewall.
Check the iptables rules using sudo iptables -S or sudo iptables -L -n. If you see a REJECT rule (such as one rejecting with icmp-host-prohibited), then that's the problem.
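A less drastic option (a sketch based on the rules shown above) is to delete just the offending REJECT rule from the FORWARD chain instead of flushing everything:
sudo iptables -D FORWARD -j REJECT --reject-with icmp-host-prohibited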
If you are not worried about your existing iptables rules and are OK with clearing them, stop the Docker service and run the following:
sudo iptables -F
sudo iptables -X
sudo iptables -t nat -F
sudo iptables -t nat -X
sudo iptables -t mangle -F
sudo iptables -t mangle -X
This will clear all the tables. Then start the Docker service and run the container again.
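With systemd, the surrounding steps would look something like this (assuming the service is named docker):
sudo systemctl stop docker
# ... flush the tables as shown above ...
sudo systemctl start docker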
When I try to start the Consul container with this command
docker run --restart=unless-stopped -d -p 8500:8500 -h consul progrium/consul -server -bootstrap
it gives the following error.
docker: Error response from daemon: driver failed programming external
connectivity on endpoint tiny_bhaskara
(b53c9aa988d96750bfff7c19c6717b18756c7b92287f0f7a4d9e9fa81f42c43d):
iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0
--dport 8500 -j DNAT --to-destination 172.17.0.2:8500 ! -i docker0: iptables: No chain/target/match by that name.
No idea what's going on!!
From this answer:
Something on your system has removed the docker iptables entries that it needs to work. Two fixes have been suggested here:
For CentOS:
sudo service docker restart
sudo service iptables save
And for Ubuntu:
sudo apt-get install iptables-persistent
sudo service docker restart
iptables-save > /etc/iptables/rules.v4 # you may need to "sudo -s" to get a root shell first
After restarting Docker, you should see the DOCKER chain under the nat table:
sudo iptables -t nat -vL
I am trying to install the Docker image of Restcomm on my Windows 8.1 laptop by following http://www.telestax.com/rapid-webrtc-application-development-with-restcomm-and-docker/.
I am able to install Docker and run the hello-world container properly.
But when I run the command to create the container:
docker run --name=restcomm -d -e STATIC_ADDRESS="YOUR_HOST_IP_ADDRESS_HERE" -p 8080:8080 -p 5080:5080 -p 5082:5082 -p 5080:5080/udp -p 65000-65535:65000-65535/udp gvagenas/restcomm
I am getting the following error:
Error response from daemon: Cannot start container c88fcab56034096e98ddcd71d1d2db17e5858b88c64b1859efcb86d740e3972c: failed to create endpoint restcomm on network bridge: iptables failed: iptables --wait -t nat -A DOCKER -p udp -d 0/0 --dport 65116 -j DNAT --to-destination 172.17.0.2:65116 ! -i docker0: (fork/exec /usr/local/sbin/iptables: cannot allocate memory)
Any help or suggestions would be appreciated. Thanks in advance.
This doesn't seem directly related to the RestComm Docker image but to Docker itself; see https://github.com/docker/docker/issues/8539 and https://github.com/docker/docker/issues/9139