To build a certain image I need to create a tunnel and make docker use this tunnel as a socks5 proxy (to use the proxy for DNS too).
So now I've got several problems:
How to make docker use the proxy that is on the host?
How to make docker use the proxy to get the base image?
How to make docker use the proxy for the RUN instruction?
How to make docker use the proxy for the ADD instruction?
Since I spent all day researching this, here are the answers.
I will leave the partially incomplete/wrong/old answer below, since I set up a new system today and had to figure out all of these questions again, because some parts of the old answer no longer made sense.
Using localhost:port does not work. Until this issue is resolved, you need to use the IP address of your docker0 network interface (172.17.0.1 in my case). If your host OS is Linux, you can use localhost:port by passing the additional --network=host parameter to docker build, as mentioned in another answer.
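If you are unsure which address your docker0 bridge has, you can read it off the interface. A sketch (the parsing is shown against a captured sample line so it is visible without a running Docker host; on a real machine you would pipe `ip -4 addr show docker0` instead):

```shell
# Extract the IPv4 address of the docker0 bridge. The sample string
# stands in for the real output of `ip -4 addr show docker0`.
sample='    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0'
bridge_ip=$(printf '%s\n' "$sample" | awk '/inet / {sub(/\/.*/, "", $2); print $2}')
echo "$bridge_ip"
```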
2. and 3. Put this content (change IP and port if needed) into ~/.docker/config.json (note that the protocol is socks5h):
{
    "proxies":
    {
        "default":
        {
            "httpProxy": "socks5h://172.17.0.1:3128",
            // or "httpProxy": "socks5h://localhost:3128", with --network=host
            "httpsProxy": "socks5h://172.17.0.1:3128",
            "noProxy": ""
        }
    }
}
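Note that real JSON does not allow comments, so the // line above is illustrative only and must not appear in the actual file. A sketch that writes and sanity-checks the file, using a temp directory so as not to clobber a real ~/.docker/config.json:

```shell
# Write the proxy config to a throwaway directory and check that it
# is valid JSON (the comment line from the example above is omitted,
# since JSON has no comment syntax).
cfg_dir=$(mktemp -d)
cat > "$cfg_dir/config.json" <<'EOF'
{
  "proxies": {
    "default": {
      "httpProxy": "socks5h://172.17.0.1:3128",
      "httpsProxy": "socks5h://172.17.0.1:3128",
      "noProxy": ""
    }
  }
}
EOF
# Validate with python's json module, if available.
if command -v python3 > /dev/null; then
  python3 -m json.tool "$cfg_dir/config.json" > /dev/null && echo "config is valid JSON"
fi
```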
4. It seems that the ADD instruction is executed with the (proxy) environment variables of the host, ignoring those in config.json. To make things more complicated, since the daemon usually runs as root, only the root user's environment variables are picked up. More complicated still, the host of course needs to use localhost as the proxy host. And the cherry on top: for whatever reason, the protocol needs to be socks5 (without the h at the end) in this case.
In my case, since I switched to WSL2 and use docker within WSL2 (starting the dockerd docker daemon manually), I just export the needed environment variable before the call to dockerd:
#!/bin/bash
# Start Docker daemon automatically when logging in if not running.
RUNNING=`ps aux | grep dockerd | grep -v grep`
if [ -z "$RUNNING" ]; then
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY no_proxy NO_PROXY
export http_proxy=socks5h://localhost:30000
sudo -E dockerd > /dev/null 2>&1 &
disown
fi
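As an aside, the ps | grep | grep -v grep pipeline can be written more simply with pgrep; a minimal sketch of the same check, demonstrated against a throwaway sleep process rather than dockerd:

```shell
# Succeeds if any process whose command line matches $1 is running
# (pgrep -f matches the full command line and excludes itself).
is_running() {
  pgrep -f "$1" > /dev/null
}

# Demo with a background sleep instead of dockerd.
sleep 30 &
demo_pid=$!
is_running "sleep 30" && echo "found it"
kill "$demo_pid"
```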
If you have the "regular" setup on a Linux machine, you could use the old answer to 4. below, but beware that there you probably also need to use localhost.
Incomplete/wrong/old answer starting here
Using localhost:port does not work. Until this issue is resolved, you need to use the IP address of your docker0 network interface (172.17.0.1 in my case).
This answer applies to question 3 too. Just put this content (change IP and port if needed) into ~/.docker/config.json (notice that the protocol is socks5h)
{
    "proxies":
    {
        "default":
        {
            "httpProxy": "socks5h://172.17.0.1:3128",
            "httpsProxy": "socks5h://172.17.0.1:3128",
            "noProxy": ""
        }
    }
}
I do not know why (edit: now I know — it's because dockerd runs as root and does not pick up proxy environment variables from the regular user), but for the ADD instruction the former settings do not apply (names do not get resolved through the proxy). We need to put this content into /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=socks5://172.17.0.1:3128/"
then
sudo systemctl daemon-reload
sudo systemctl restart docker
(This is just wrong/unneeded with answer 2.) Also, for package managers like yum to be able to update packages during the build, you need to pass the environment variable like this:
docker build --build-arg http_proxy=socks5://172.17.0.1:3128 .
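Several proxy variables usually need to be passed at once (lower- and upper-case variants), and generating the repeated --build-arg flags can be scripted. A sketch — proxy_build_args is a hypothetical helper for illustration, not a docker feature:

```shell
# Emit one --build-arg flag per proxy variable, all pointing at the
# same proxy URL, for splicing into a docker build command line.
proxy_build_args() {
  for var in http_proxy https_proxy HTTP_PROXY HTTPS_PROXY; do
    printf -- '--build-arg %s=%s ' "$var" "$1"
  done
}

args=$(proxy_build_args "socks5://172.17.0.1:3128")
echo "docker build $args."
```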
Using localhost:port works by adding the --network=host option to the docker build command.
In order to route a container's entire Internet traffic through a local socks5 proxy, the container must have access to the host machine.
To reach the host machine from within Linux, put --network="host" in the run command:
docker run --name test --network="host" --env http_proxy="socks5://127.0.0.1:1080" --env https_proxy="socks5://127.0.0.1:1080" nginx sh -c "curl ifconfig.io"
For Mac and Windows users, use host.docker.internal:local_port:
docker run --name test --env http_proxy="socks5://host.docker.internal:1080" --env https_proxy="socks5://host.docker.internal:1080" nginx sh -c "curl ifconfig.io"
Alternatively, you can transparently redirect all container TCP traffic to a local redsocks instance listening on port 5000; the RETURN rules exempt private and reserved ranges:
sudo iptables -t nat -N REDSOCKS
sudo iptables -t nat -A REDSOCKS -d 0.0.0.0/8 -j RETURN
sudo iptables -t nat -A REDSOCKS -d 10.0.0.0/8 -j RETURN
sudo iptables -t nat -A REDSOCKS -d 127.0.0.0/8 -j RETURN
sudo iptables -t nat -A REDSOCKS -d 169.254.0.0/16 -j RETURN
sudo iptables -t nat -A REDSOCKS -d 172.16.0.0/12 -j RETURN
sudo iptables -t nat -A REDSOCKS -d 224.0.0.0/4 -j RETURN
sudo iptables -t nat -A REDSOCKS -d 240.0.0.0/4 -j RETURN
sudo iptables -t nat -A REDSOCKS -d 192.168.0.0/16 -j RETURN
sudo iptables -t nat -A REDSOCKS -d 172.17.0.0/12 -j RETURN
sudo iptables -t nat -A REDSOCKS -p tcp -j REDIRECT --to-ports 5000
sudo iptables -t nat -A OUTPUT -p tcp -o docker0 -j REDSOCKS
sudo iptables -t nat -A PREROUTING -p tcp -i docker0 -j REDSOCKS
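The RETURN rules above are what keep local and private traffic out of the proxy. The same range check can be expressed in plain shell, which makes it easy to see which destinations would be redirected. A sketch mirroring the rule list (172.17.0.0/12 is already inside 172.16.0.0/12, so it is not listed twice here):

```shell
# Convert dotted-quad IPv4 to a 32-bit integer.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# in_cidr IP NETWORK/PREFIX -> exit 0 if IP lies inside the range.
in_cidr() {
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "${2%/*}")
  bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

# Same ranges the iptables RETURN rules exempt from redirection.
is_excluded() {
  for range in 0.0.0.0/8 10.0.0.0/8 127.0.0.0/8 169.254.0.0/16 \
               172.16.0.0/12 192.168.0.0/16 224.0.0.0/4 240.0.0.0/4; do
    in_cidr "$1" "$range" && return 0
  done
  return 1
}

is_excluded 172.17.0.2 && echo "172.17.0.2: kept local (RETURN)"
is_excluded 8.8.8.8 || echo "8.8.8.8: redirected to redsocks"
```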
Related
I know it is possible to pass the http_proxy and https_proxy environment variables to a container, as shown in e.g. this SO answer. However, this only works for proxy-aware commands like wget and curl, as they merely read and use these environment variables.
I need to connect everything through the proxy, so that all internet access is routed via the proxy. Essentially, the proxy should be transformed into a kind of VPN.
I am thinking about something similar to the --net=container option where the container gets its network from another container.
How do I configure a container to run everything through the proxy?
Jan Garaj's comment actually pointed me in the right direction.
As noted in my question, not all programs and commands use the proxy environment variables, so simply passing the http_proxy and https_proxy env vars to docker is not a solution. I needed a solution where the whole docker container directs every network request (on certain ports) through the proxy, no matter which program or command.
The Medium article demonstrates how to build and set up a docker container that, with the help of redsocks, redirects all ftp requests to another running docker container acting as a proxy. The communication between the containers is done via a docker network.
In my case I already have a running proxy so I don't need a docker network and a docker proxy. Also, I need to proxy http and https, not ftp.
By changing the configuration files I got it working. In this example I simply call wget ipecho.net/plain to retrieve my outside IP. If it works, this should be the IP of the proxy, not my real IP.
Configuration
Dockerfile:
FROM debian:latest
LABEL maintainer="marlar"
WORKDIR /app
ADD . /app
RUN apt-get update && \
    apt-get upgrade -qy && \
    apt-get install -qy iptables redsocks curl wget lynx
COPY redsocks.conf /etc/redsocks.conf
ENTRYPOINT /bin/bash run.sh
setup script (run.sh):
#!/bin/bash
echo "Configuration:"
echo "PROXY_SERVER=$PROXY_SERVER"
echo "PROXY_PORT=$PROXY_PORT"
echo "Setting config variables"
sed -i "s/vPROXY-SERVER/$PROXY_SERVER/g" /etc/redsocks.conf
sed -i "s/vPROXY-PORT/$PROXY_PORT/g" /etc/redsocks.conf
echo "Restarting redsocks and redirecting traffic via iptables"
/etc/init.d/redsocks restart
iptables -t nat -A OUTPUT -p tcp --dport 80 -j REDIRECT --to-port 12345
iptables -t nat -A OUTPUT -p tcp --dport 443 -j REDIRECT --to-port 12345
echo "Getting IP ..."
wget -q -O- https://ipecho.net/plain
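The sed templating in run.sh can be tried in isolation against a throwaway copy of the two placeholder lines (the PROXY_SERVER/PROXY_PORT values here are arbitrary examples):

```shell
# Reproduce the placeholder substitution from run.sh on a scratch
# file, then show the result. Requires GNU sed for -i.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
ip = vPROXY-SERVER;
port = vPROXY-PORT;
EOF
PROXY_SERVER=10.0.0.5
PROXY_PORT=1080
sed -i "s/vPROXY-SERVER/$PROXY_SERVER/g" "$tmp"
sed -i "s/vPROXY-PORT/$PROXY_PORT/g" "$tmp"
cat "$tmp"
```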
redsocks.conf:
base {
    log_debug = off;
    log_info = on;
    log = "file:/var/log/redsocks.log";
    daemon = on;
    user = redsocks;
    group = redsocks;
    redirector = iptables;
}
redsocks {
    local_ip = 127.0.0.1;
    local_port = 12345;
    ip = vPROXY-SERVER;
    port = vPROXY-PORT;
    type = http-connect;
}
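Since the original question was about a socks5 proxy: redsocks also supports socks5 as an upstream type, so the same setup should work against a socks5 proxy by changing one line in the redsocks block (untested in this answer):

```
type = socks5;
```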
Building the container
docker build -t proxy-via-iptables .
Running the container
docker run -i -t --privileged -e PROXY_SERVER=x.x.x.x -e PROXY_PORT=xxxx proxy-via-iptables
Replace the proxy server and port with the relevant numbers.
If the container works and uses the external proxy, wget should print the IP of the proxy even though the wget command does not use the -e use_proxy=yes option. If it does not work, you will see your own IP, or perhaps no IP at all, depending on how it fails.
You can use the proxy env var:
docker container run \
  -e HTTP_PROXY=http://username:password@proxy2.domain.com \
  -e HTTPS_PROXY=http://username:password@proxy2.domain.com \
  yourimage
If you want the proxy server to be used automatically when starting a container, you can configure default proxy servers in the Docker CLI configuration file (~/.docker/config.json). You can find instructions for this in the networking section of the user guide.
For example:
{
"proxies": {
"default": {
"httpProxy": "http://username:password#proxy2.domain.com",
"httpsProxy": "http://username:password#proxy2.domain.com"
}
}
}
To verify if the ~/.docker/config.json configuration is working, start a container and print its env:
docker container run --rm busybox env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=220e4df13604
HTTP_PROXY=http://username:password@proxy2.domain.com
http_proxy=http://username:password@proxy2.domain.com
HTTPS_PROXY=http://username:password@proxy2.domain.com
https_proxy=http://username:password@proxy2.domain.com
HOME=/root
I tried to run a httpd container with the following command:
docker container run -d -p 8080:80 httpd
but when I access localhost:8080, I always get this error ERR_CONNECTION_RESET
It seems some other process is using port 80. You can reset the ports by resetting the iptables rules:
Stop the Docker service.
Reset the iptables rules:
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -t nat -F
iptables -t mangle -F
iptables -F
iptables -X
Restart the Docker service.
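The steps above, put together as one script. A sketch that assumes systemd; it is only written out and syntax-checked here, because actually running it requires root and drops ALL firewall rules, not just Docker's:

```shell
# Write the full reset sequence to a script and syntax-check it.
cat > /tmp/reset-docker-iptables.sh <<'EOF'
#!/bin/bash
set -e
systemctl stop docker
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -t nat -F
iptables -t mangle -F
iptables -F
iptables -X
systemctl start docker
EOF
bash -n /tmp/reset-docker-iptables.sh && echo "syntax OK"
```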
I have a fairly plain Debian Buster install. Debian Buster uses nftables rather than iptables. If I try and run a container with a published port:
sudo docker run -it --rm --name=port-test -p 1234:1234/tcp debian:jessie-slim
then I get this error:
docker: Error response from daemon: driver failed programming external connectivity on endpoint port-test (941052b9f420df39ac3d191dcbe12e97276703903911e7b5172663e7736d59e0): (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 1234 -j DNAT --to-destination 172.17.0.2:1234 ! -i docker0: iptables v1.8.2 (nf_tables): Chain 'DOCKER' does not exist
How do I get port publishing working?
Please see this issue and comment:
https://github.com/moby/moby/issues/26824#issuecomment-517046804
You can run sudo update-alternatives --config iptables and sudo update-alternatives --config ip6tables (if you use IPv6), and set it to iptables-legacy which is a compatibility mode that Docker can work with.
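You can tell which backend is currently active from the version banner: iptables --version prints (nf_tables) or (legacy) after the version number. A sketch of the check, parsed from sample strings so it runs without iptables installed:

```shell
# Classify an `iptables --version` banner by backend.
backend() {
  case "$1" in
    *nf_tables*) echo "nf_tables" ;;
    *legacy*)    echo "legacy" ;;
    *)           echo "unknown" ;;
  esac
}

backend "iptables v1.8.2 (nf_tables)"  # the failing setup from the question
backend "iptables v1.8.2 (legacy)"     # the mode Docker can work with
```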
I have a CentOS server with two static IP addresses (192.168.3.100 and 192.168.3.101) on the same NIC, and two containers running on it with port mappings as below. The containers use docker's default 'bridge' network.
192.168.3.100:80->80/tcp container1
192.168.3.101:80->80/tcp container2
From the host, I can execute curl 192.168.3.100 or curl 192.168.3.101 and works fine. From the host/containers I can execute curl 172.17.0.2 or curl 172.17.0.3 and works fine.
But I cannot execute curl 192.168.3.100 or curl 192.168.3.101 from either of these containers; it ends up with the error No route to host. I can ping it, though.
What am I missing here? I want to avoid using a 192 docker network, as I do not want to tie up that address space with one machine. Using docker 1.12.6.
Output of the iptables reject rules (iptables -S | grep -i reject):
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
Thanks for your input
If you are able to ping but not able to curl, and you get no route to host, it usually means that your packets are being rejected by the firewall.
Check the iptables rules using sudo iptables -S or sudo iptables -L -n. If you see a REJECT rule (or a REJECT with icmp rule), then that's the problem.
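Grepping for such rules can be scripted; a sketch run against the two catch-all rules quoted in the question, so it works without touching a live firewall:

```shell
# Count REJECT rules in (a sample of) `iptables -S` output. On a real
# host you would pipe `sudo iptables -S` instead of the sample text.
rules='-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited'
count=$(printf '%s\n' "$rules" | grep -c -- '-j REJECT')
echo "$count REJECT rules found"
```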
If you are not worried about your iptables rules and are OK with clearing them, stop the docker service and run the following:
$ iptables -F
$ iptables -X
$ iptables -t nat -F
$ iptables -t nat -X
$ iptables -t mangle -F
$ iptables -t mangle -X
This will clear all the tables. Then start the docker service and run the container again
When I try to start docker consul by this command
docker run --restart=unless-stopped -d -p 8500:8500 -h consul progrium/consul -server -bootstrap
it gives the following error.
docker: Error response from daemon: driver failed programming external
connectivity on endpoint tiny_bhaskara
(b53c9aa988d96750bfff7c19c6717b18756c7b92287f0f7a4d9e9fa81f42c43d):
iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0
--dport 8500 -j DNAT --to-destination 172.17.0.2:8500 ! -i docker0: iptables: No chain/target/match by that name.
No idea what's going on!!
From this answer:
Something on your system has removed the docker iptables entries that it needs to work. Two fixes have been suggested here:
For CentOS:
sudo service docker restart
sudo service iptables save
And for Ubuntu:
sudo apt-get install iptables-persistent
sudo service docker restart
iptables-save > /etc/iptables/rules.v4 # you may need to "sudo -s" to get a root shell first
After the restart of docker, you should see the docker chain under the nat table:
iptables -t nat -vL
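A quick scripted check for the chain; demonstrated against a captured iptables-save fragment, so on a real host you would substitute sudo iptables-save -t nat for the sample text:

```shell
# Look for the DOCKER chain declaration (lines starting with ':') in
# iptables-save output for the nat table.
sample=':PREROUTING ACCEPT [0:0]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER'
printf '%s\n' "$sample" | grep -q '^:DOCKER ' && echo "DOCKER chain present"
```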