I've been following the manual here, but I'm having trouble getting docker to use the new bridge.
I've added the following to /etc/default/docker and /etc/sysconfig/docker but as soon as I start the docker service it continues to use the docker0 bridge.
The default docker0 IP range conflicts with many internal IPs on my network. I simply want to configure Docker to use a 192.168.5.0/24 range instead.
$ netstat -r
Destination Gateway Genmask Flags MSS Window irtt Iface
192.168.5.0 0.0.0.0 255.255.255.0 U 0 0 0 bridge0
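The kind of option involved looks roughly like this (the variable name differs per distro, e.g. DOCKER_OPTS in /etc/default/docker vs OPTIONS in /etc/sysconfig/docker; shown here only as an illustration, not as the verbatim content of my files):
DOCKER_OPTS="--bridge=bridge0"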
Found out what I'm doing wrong.
The vendor suggests creating a conf file under /etc/systemd/system/docker.service.d to override directives while preserving the stock docker.service unit.
# cd /etc/systemd/system/
# mkdir docker.service.d
# cd docker.service.d
# vi override.conf
Then add the following directives (ExecStart appears twice on purpose: the empty one clears the ExecStart defined in /usr/lib/systemd/system/docker.service so the second one replaces it):
[Service]
EnvironmentFile=-/etc/sysconfig/docker
ExecStart=
ExecStart=/usr/bin/docker -d -H fd:// $DOCKER_OPTS
Then daemon-reload and start docker
# systemctl daemon-reload
# systemctl start docker
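To confirm that the override took effect and that containers end up on the new bridge, two quick checks can be run (output omitted; busybox is just an arbitrary small test image, the container should get a 192.168.5.x address):
# systemctl show docker --property=ExecStart
# docker run --rm busybox ip addr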
I'm trying to understand how TPROXY works in an effort to build a transparent proxy for Docker containers.
After lots of research I managed to create a network namespace, inject a veth interface into it and add TPROXY rules. The following script worked on a clean Ubuntu 18.04.3:
ip netns add ns0
ip link add br1 type bridge
ip link add veth0 type veth peer name veth1
ip link set veth0 master br1
ip link set veth1 netns ns0
ip addr add 192.168.3.1/24 dev br1
ip link set br1 up
ip link set veth0 up
ip netns exec ns0 ip addr add 192.168.3.2/24 dev veth1
ip netns exec ns0 ip link set veth1 up
ip netns exec ns0 ip route add default via 192.168.3.1
iptables -t mangle -A PREROUTING -i br1 -p tcp -j TPROXY --on-ip 127.0.0.1 --on-port 1234 --tproxy-mark 0x1/0x1
ip rule add fwmark 0x1 tab 30
ip route add local default dev lo tab 30
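The rules set up above can be sanity-checked with the usual inspection commands (output omitted):
iptables -t mangle -L PREROUTING -n -v
ip rule show
ip route show table 30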
After that I launched a toy Python server from Cloudflare blog:
import socket
IP_TRANSPARENT = 19
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.setsockopt(socket.IPPROTO_IP, IP_TRANSPARENT, 1)
s.bind(('127.0.0.1', 1234))
s.listen(32)
print("[+] Bound to tcp://127.0.0.1:1234")
while True:
    c, (r_ip, r_port) = s.accept()
    l_ip, l_port = c.getsockname()
    print("[ ] Connection from tcp://%s:%d to tcp://%s:%d" % (r_ip, r_port, l_ip, l_port))
    c.send(b"hello world\n")
    c.close()
And finally by running ip netns exec ns0 curl 1.2.4.8 I was able to observe a connection from 192.168.3.2 to 1.2.4.8 and receive the "hello world" message.
The problem is that this setup seems to have compatibility issues with Docker. Everything worked well in a clean environment, but once I started Docker things went wrong: the TPROXY rule no longer appeared to work. Running ip netns exec ns0 curl 192.168.3.1 gave "Connection reset" and running ip netns exec ns0 curl 1.2.4.8 timed out (both should have produced the "hello world" message). I tried restoring all iptables rules, deleting the ip routes and rules generated by Docker, and shutting down Docker, but none of that helped, even though I hadn't configured any networks or containers.
What is happening behind the scenes and how can I get TPROXY working normally?
I traced all the processes created by Docker using strace -f dockerd and looked for lines containing exec. Most of the commands are iptables commands, which I had already ruled out, but the lines with modprobe looked interesting. I loaded these modules one by one and figured out that the module causing the trouble is br_netfilter.
The module enables filtering of bridged packets through iptables, ip6tables and arptables. The iptables part can be disabled by executing echo "0" | sudo tee /proc/sys/net/bridge/bridge-nf-call-iptables. After executing the command, the script worked again without impacting Docker containers.
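To make the setting persistent across reboots, a sysctl drop-in along these lines should work (the file name is arbitrary; note that the net.bridge.* keys only exist while br_netfilter is loaded):
echo "net.bridge.bridge-nf-call-iptables = 0" | sudo tee /etc/sysctl.d/99-bridge-nf-call-iptables.conf
sudo sysctl --system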
I am still confused, though; I don't fully understand the consequences of this setting. I enabled packet tracing, and the packets matched the exact same set of rules with bridge-nf-call-iptables disabled and enabled, yet in the former case the first TCP SYN packet got delivered to the Python server, while in the latter case it got dropped for unknown reasons.
Try running docker with -p 1234
"By default, when you create a container, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag."
https://docs.docker.com/config/containers/container-networking/
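For example (the image name here is just a placeholder):
docker run -d -p 1234:1234 my-proxy-image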
I'm trying to run
/usr/bin/docker run --rm -v /var/data/redis:/data -v /var/data/conf/redis.conf:/usr/local/etc/redis/redis.conf --name redis -p 6379:6379 redis:5.0.3-alpine3.9
but I get:
/usr/bin/docker: Error response from daemon: driver failed programming external connectivity on endpoint redis (f16f19b7727a710fb6c96be566dac66ce26282982960d97faa28861c24fcf2fb): Bind for 0.0.0.0:6379 failed: port is already allocated.
When I try to check the ports used with netstat, I get:
[root@artik ~]# netstat -nlpute | grep 6379
tcp6 0 0 :::6379 :::* LISTEN 0 14384 2471/docker-proxy
I have no docker containers running right now.
I don't understand this issue; what should I do?
[root@artik ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Steps I had to take to get everything working:
sudo service docker stop
sudo rm /var/lib/docker/network/files/local-kv.db
sudo service docker start
docker system prune
And then try again.
From your netstat output it's clear that there is one process holding port 6379:
[root@artik ~]# netstat -nlpute | grep 6379
tcp6 0 0 :::6379 :::* LISTEN 0 14384 2471/docker-proxy
docker-proxy processes are created when you do port forwarding in docker run, which is the case here with -p 6379:6379.
For more info on docker-proxy check this out.
I suspect that you earlier ran a redis container which used port 6379, but that container was not properly cleaned up, which kept its docker-proxy process running, and hence you got "port is already allocated".
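A quick way to check for such a leftover (adjust the name if the old container was called something else):
docker ps -a --filter name=redis   # any old redis container still around?
ps -ef | grep docker-proxy         # the proxy process holding the port
docker rm -f redis                 # remove the old container if it does show up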
Hope this helps.
As DannyMoshe suggested, for anyone else:
Try this before you potentially mess up your whole setup:
sudo service docker stop
sudo service docker start
Remove the ports: - ... entry in the docker-compose file and let Docker assign the host port by itself, or change the port mapping on the host side from 6379:6379 to 6378:6379; that worked for me. Before doing this you may need to clear already started containers: docker rm -f $(docker ps -a -q)
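For reference, the changed mapping in docker-compose.yml would look something like this (the service name is just an example):
services:
  redis:
    ports:
      - "6378:6379"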
I have attached to a docker container and need to find out the number of sockets opened by a Java application. Unfortunately there is no lsof or netstat available in the container. There is no data in /proc/PID/net/tcp. Is there any way I can find this data?
I like netshoot for this. You can run a container in the same networking and even pid namespace, and use the tools in netshoot to analyze the other container's network:
$ docker run -d -p 8888:80 --name nginx-test nginx
d8a90f5c7d1744483ae6d26cc97dad222ed237b5c4211f711c9f15f88252897f
$ docker run --net container:nginx-test --pid container:nginx-test -it --rm nicolaka/netshoot
/ # netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1/nginx: master pro
/ # ps -ef
PID USER TIME COMMAND
1 root 0:00 nginx: master process nginx -g daemon off;
7 104 0:00 nginx: worker process
8 root 0:00 sh
15 root 0:00 ps -ef
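Since the original question was about counting the sockets opened by a Java process, the same approach works there; from inside netshoot something like this should do it (assuming the process name contains "java"):
/ # ss -tanp | grep -c java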
Alternatively, you can read /proc/PID/net/tcp on the host machine, as long as you are on the same box as the docker daemon. This is less elegant than @BMitch's answer.
What you need to do is find out the PID of your process outside the container (in the main pid namespace, technically speaking, your host).
ps aux | grep java
Inside your container, your java process has one pid; but outside it has another pid, which you can use to access the information you requested under /proc/PID/net/tcp.
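A concrete way to do that (the container name is just an example; the nsenter step is optional but lets you use the host's tools inside the container's network namespace):
PID=$(docker inspect --format '{{.State.Pid}}' my_java_container)
sudo cat /proc/$PID/net/tcp | wc -l   # rough socket count; includes one header line
sudo nsenter -t $PID -n ss -tnp       # list TCP sockets inside that netns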
The detached docker container is isolated from the outside when we run it as below,
$ docker run -d --name test_container ubuntu/ping \
/bin/sh -c "while true do echo hello world; sleep 1; done"
$ docker inspect test_container | grep IPAddress
[ip of test_container]
$ ping [ip of test_container]
[timeout]
$ ifconfig docker0 | grep "inet addr"
[ip of docker bridge]
$ ping [ip of docker bridge]
[ok]
$ docker exec -it test_container /bin/bash
# ping [ip of test_container]
[ok]
# ping [ip of docker bridge]
[timeout]
How can I make the IP address of this container reachable from the outside?
By default the docker daemon listens on a unix socket.
You can make it listen on a TCP socket as well by doing:
docker daemon -H tcp://validIpOnYourHost:port
The default port is 2375 if you do not provide one.
See this page for more explanation: https://docs.docker.com/v1.11/engine/reference/commandline/daemon/
Be careful: if you expose Docker over TCP like this, it is not secured in any way.
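For example (the address below is a placeholder for an IP on your host; keeping the unix socket as well means local clients keep working):
docker daemon -H unix:///var/run/docker.sock -H tcp://192.168.1.10:2375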
After reading your question again, I probably replied to something else. Could you run:
docker network inspect bridge
and paste the JSON output?
I had similar issues when the attribute
"com.docker.network.bridge.enable_ip_masquerade"
was set to false.
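To check the current value, and to create a user-defined bridge network with masquerading explicitly enabled, something along these lines should work (the network name is just an example):
docker network inspect bridge --format '{{json .Options}}'
docker network create -o com.docker.network.bridge.enable_ip_masquerade=true my_bridge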
I am trying to set up 4 containers (with nginx) on a system with 4 IPs and 2 interfaces. Can someone please help me? For now only 3 containers are accessible; the 4th one times out when accessed from the browser instead of showing a welcome page. I have added the IP routes I think are needed.
Host is Ubuntu.
When this happened I thought it had something to do with the IP routes, so on the same system I installed apache and created 4 virtual hosts, each listening on a different IP and with a different document root.
When checked, all the IPs were accessible and served the correct documents.
So now I am stuck; what do I do next?
Configuration:
4 IPs and 2 interfaces, so I created 2 IP aliases. All IPs are configured in /etc/network/interfaces except the first one; eth0 is set to DHCP mode.
auto eth0:1
iface eth0:1 inet static
address 172.31.118.182
netmask 255.255.255.0
auto eth1
iface eth1 inet static
address 172.31.119.23
netmask 255.255.255.0
auto eth1:1
iface eth1:1 inet static
address 172.31.119.11
netmask 255.255.255.0
It goes like this (the IPs are private, so I guess there is no problem sharing them here):
eth0 - 172.31.118.249
eth0:1 - 172.31.118.182
eth1 - 172.31.119.23
eth1:1 - 172.31.119.11
Now the docker creation commands
All are just basic nginx containers, so when they work they show the default nginx page.
sudo docker create -i -t -p 172.31.118.249:80:80 --name web1 web_fresh
sudo docker create -i -t -p 172.31.118.182:80:80 --name web2 web_fresh
sudo docker create -i -t -p 172.31.119.23:80:80 --name web3 web_fresh
sudo docker create -i -t -p 172.31.119.11:80:80 --name web4 web_fresh
sudo docker start web1
sudo docker start web2
sudo docker start web3
sudo docker start web4
--
Now here web1 & web2 became immediately accessible, but the containers bound to eth1 and eth1:1 were not. So I figured IP routes must be the issue and went ahead and added some routes.
ip route add 172.31.119.0/24 dev eth1 src 172.31.119.23 table eth1
ip route add default via 172.31.119.1 table eth1
ip route add 172.31.119.0/24 dev eth1 src 172.31.119.11 table eth11
ip route add default via 172.31.119.1 table eth11
ip rule add from 172.31.119.23 lookup eth1 prio 1002
ip rule add from 172.31.119.11 lookup eth11 prio 1003
This made web3 accessible as well, but not the one on eth1:1. So this is where I am stuck at the moment.
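For completeness, the usual next checks to see where the packets for 172.31.119.11 get lost would be (just standard diagnostics, output omitted):
ip rule show
ip route show table eth11
sudo tcpdump -ni eth1 host 172.31.119.11 and port 80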