Set IP for all incoming and outgoing traffic - Docker

I have a host with 5 IPs assigned to it, and I can access the host via any of these IPs.
Any connection made from that host, including from its Docker containers, is detected as coming from IP1.
I have a container on that host that I want to use IP2. How can I configure that container so that any connection it makes to external servers appears to come from IP2 instead of IP1?
Thanks!

To achieve this you need to edit the routes on your machine. You can start by running this command to find out the current routes:
$ ip route show
default via 10.1.73.254 dev eth0 proto static metric 100
10.1.73.0/24 dev eth0 proto kernel scope link src 10.1.73.17 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.18.0.0/16 dev docker_gwbridge proto kernel scope link src 172.18.0.1
Then you need to change the default route like this (use replace rather than add, since a default route already exists):
ip route replace default via ${YOUR_GATEWAY} dev eth0 proto static metric 100

What worked for me in the end was to create a Docker network and add an iptables POSTROUTING rule for that local range:
docker network create bridge-smtp --subnet=192.168.1.0/24 --gateway=192.168.1.1
iptables -t nat -I POSTROUTING -s 192.168.1.0/24 -j SNAT --to-source MYIP_ADD_HERE
docker run --rm --network bridge-smtp byrnedo/alpine-curl http://www.myip.ch
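The same approach can be wrapped in a small re-runnable script; a sketch only, where SRC_IP is a placeholder for your real IP2 (`iptables -C` checks for the rule first, so re-runs don't duplicate it):

```shell
#!/bin/sh
# Sketch: SUBNET/GATEWAY must match the docker network below; SRC_IP is a
# placeholder for the host address (IP2) you want containers to appear as.
SUBNET=192.168.1.0/24
GATEWAY=192.168.1.1
SRC_IP=198.51.100.2   # placeholder, substitute your real IP2

# Create the network only if it doesn't exist yet.
docker network inspect bridge-smtp >/dev/null 2>&1 ||
  docker network create bridge-smtp --subnet="$SUBNET" --gateway="$GATEWAY"

# Insert the SNAT rule only if an identical rule isn't already present.
iptables -t nat -C POSTROUTING -s "$SUBNET" -j SNAT --to-source "$SRC_IP" 2>/dev/null ||
  iptables -t nat -I POSTROUTING -s "$SUBNET" -j SNAT --to-source "$SRC_IP"

# Verify the apparent source address from inside a container.
docker run --rm --network bridge-smtp byrnedo/alpine-curl http://www.myip.ch
```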

Related

Putting a QEMU guest onto a network created by docker compose

I am trying to emulate a hardware configuration between two Intel NUC servers and a Raspberry Pi server using docker containers and docker compose. Since the latter is ARM, and the test host is x86, I decided to run the RPi image within QEMU encapsulated in one of the docker containers. The docker compose file is pretty simple:
services:
  rpi:
    build: ./rpi
    networks:
      iot:
        ipv4_address: 192.168.1.3
      ether:
        ipv4_address: 192.168.2.3
  nuc1:
    build: ./nuc
    networks:
      - "iot"
  nuc2:
    build: ./nuc
    networks:
      - "ether"
networks:
  iot:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1
  ether:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.2.0/24
          gateway: 192.168.2.1
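Assuming a recent Compose CLI, the static addresses assigned above can be sanity-checked once the stack is up; a sketch (the exact container name depends on your project name, so the service is looked up via `docker compose ps -q`):

```shell
docker compose up -d

# Print each network's name and address for the rpi service.
docker inspect \
  -f '{{range $k, $v := .NetworkSettings.Networks}}{{$k}}: {{$v.IPAddress}}{{"\n"}}{{end}}' \
  $(docker compose ps -q rpi)
```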
You'll notice that there are two separate networks: I have two VLANs I am trying to emulate, and the rpi server is a part of both, hence the docker container is a part of both. I have QEMU running fine in the rpi container, and the docker containers themselves are behaving on the network as intended.
The problem I am having is trying to set up the networking so that the QEMU image appears to be just another node on the network at the addresses 192.168.1.3/192.168.2.3. My requirements are that:
The QEMU guest knows that its own IP on each network is 192.168.1.3 and 192.168.2.3 respectively.
The other NUC docker containers can reach the QEMU image at those IP addresses (i.e. I don't want to give the docker container running the QEMU image its own IP address, have the other containers hit that IP address, and then NAT the address to the QEMU address).
I tried the steps listed on this gist with no luck. Additionally, I tried creating a TAP with an address of 10.0.0.1 on the QEMU docker container, bound the QEMU guest to that TAP, and then created an iptables rule to NAT traffic received to the TAP. However, the issue is that the destination address is then 10.0.0.1 while the QEMU guest thinks its own address is 192.168.1.3, so it won't receive the packet.
As you can see, I am a bit stuck conceptually, and need some help with a direction to take this.
Right now, this is the network configuration that I set up to handle traffic on the QEMU container (excuse the inconsistent use of ip: iproute2 is not installed on the image I am using, and I can't seem to get the containers to reach the internet):
brctl addbr br0
ip addr flush dev eth0
ip addr flush dev eth1
brctl addif br0 eth0
brctl addif br0 eth1
tunctl -t tap0 -u $(whoami)
brctl addif br0 tap0
ifconfig br0 up
ifconfig tap0 up
ip addr add 192.168.1.3 dev br0
ip addr add 192.168.2.3 dev br0
ip route add 192.168.1.1/24 dev br0
ip route add 192.168.2.1/24 dev br0
ip addr add 10.0.0.1 dev tap0
Then I've done the following forwarding rules:
iptables -t nat -A PREROUTING -i br0 -d 192.168.1.1 -j DNAT --to 10.0.0.1
iptables -t nat -A POSTROUTING -o br0 -s 10.0.0.1 -j SNAT --to 192.168.1.1
iptables -t nat -A PREROUTING -i br0 -d 192.168.2.1 -j DNAT --to 10.0.0.1
iptables -t nat -A POSTROUTING -o br0 -s 10.0.0.1 -j SNAT --to 192.168.2.1

How to channel Docker container traffic through a host's network adapter

I have the following network routes on my host PC. I am using SoftEther VPN; it's set up on the adapter vpn_se.
$ ip route
default via 192.168.43.1 dev wlan0 proto dhcp metric 600
vpn.ip.adress via 192.168.43.1 dev wlan0
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.30.0/24 dev vpn_se proto kernel scope link src 192.168.30.27
192.168.43.0/24 dev wlan0 proto kernel scope link src 192.168.43.103 metric 600
Now I want to route all traffic from a docker container through vpn_se, something like:
172.17.0.6 via 192.168.30.1 dev vpn_se
How can I achieve this?
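No accepted fix appears here, but a common pattern for this kind of requirement is policy routing plus SNAT. A sketch using the addresses from the routes above (the table name `vpn` is an assumption, and 172.17.0.6 is the container's address):

```shell
# Declare the routing table once (assumed name "vpn", assumed free ID 100).
grep -q "^100 vpn$" /etc/iproute2/rt_tables ||
  echo "100 vpn" >> /etc/iproute2/rt_tables

# Send traffic originating from the container out via the vpn_se gateway.
ip rule add from 172.17.0.6 lookup vpn
ip route add default via 192.168.30.1 dev vpn_se table vpn

# Rewrite the container's source to the vpn_se address on the way out.
iptables -t nat -A POSTROUTING -s 172.17.0.6 -o vpn_se -j SNAT --to-source 192.168.30.27
```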

Network configuration to support running wireguard inside a docker container

I'd like to run WireGuard inside a container. Long story short, my Docker instance is very old (1.7.0). I've got it built, running, and listening for connections:
interface: wg0
public key: JhRxiXF9nlzyMWcxiuPc/PIDWxNryX2FvdbMtcgJ/Eo=
private key: (hidden)
listening port: 32040
peer: iWCu/UNSuf8AXry7ltiL+aNJQcAyXHs8lR5S0dMReX8=
endpoint: 172.58.92.33:35095
allowed ips: 0.0.0.0/0, ::/0
latest handshake: 9 seconds ago
transfer: 3.07 KiB received, 0 B sent
The client can connect to the server (heartbeats are good!); the problem I'm running into is that the traffic isn't being properly routed out of the container to the destination and back. I'm guessing this is some sort of DNAT issue.
10.0.10.0/24 dev bond0 proto kernel scope link src 10.0.10.150
10.8.0.1 dev tun0 proto kernel scope link src 10.8.52.36
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.42.1
Any guidance / thoughts would be appreciated.
It turns out this rule needs to be added inside the container:
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
Works perfectly after that!
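For completeness: the MASQUERADE rule only helps if the container is also allowed to forward packets between wg0 and eth0, so a fuller sketch of the in-container setup might look like this (assuming the container runs with NET_ADMIN and sysctl access):

```shell
# Allow the container to route packets between wg0 and eth0.
sysctl -w net.ipv4.ip_forward=1

# Rewrite forwarded traffic leaving via eth0 so replies return to the
# container rather than to the (unroutable) WireGuard peer addresses.
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```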

Routing Docker container traffic to a proxy with iptables

I have two VPSs: the first machine ("proxy" from now on) is the proxy, and the second machine ("dock" from now on) is the docker host. I want to redirect all traffic generated inside a docker container over the proxy, so as not to expose the dock machine's public IP.
As the connection between the VPSs is over the internet (no local connection), I created a tunnel between them with ip tunnel as follows:
On proxy:
ip tunnel add tun10 mode ipip remote x.x.x.x local y.y.y.y dev eth0
ip addr add 192.168.10.1/24 peer 192.168.10.2 dev tun10
ip link set dev tun10 mtu 1492
ip link set dev tun10 up
On dock:
ip tunnel add tun10 mode ipip remote y.y.y.y local x.x.x.x dev eth0
ip addr add 192.168.10.2/24 peer 192.168.10.1 dev tun10
ip link set dev tun10 mtu 1492
ip link set dev tun10 up
PS: I don't know if ip tunnel can be used in production (that's another question); in any case I'm planning to use Libreswan or OpenVPN as the tunnel between the VPSs.
Afterwards, I added SNAT rules to iptables on both VPSs and some routing rules as follows:
On proxy:
iptables -t nat -A POSTROUTING -s 192.168.10.2/32 -j SNAT --to-source y.y.y.y
On dock:
iptables -t nat -A POSTROUTING -s 172.27.10.0/24 -j SNAT --to-source 192.168.10.2
ip route add default via 192.168.10.1 dev tun10 table rt2
ip rule add from 192.168.10.2 table rt2
And last but not least, I created a docker network with one test container attached to it as follows:
docker network create --attachable --opt com.docker.network.bridge.name=br-test --opt com.docker.network.bridge.enable_ip_masquerade=false --subnet=172.27.10.0/24 testnet
docker run -it --rm --network testnet alpine:latest /bin/sh
Unfortunately, all of this ended with no success. So the question is: how do I debug this? Is this the correct approach? How would you do the redirection over the proxy?
Some words about the theory: traffic coming from the 172.27.10.0/24 subnet hits the iptables SNAT rule, and its source IP changes to 192.168.10.2. The routing rule then routes it over the tun10 device, i.e. the tunnel. On the proxy it hits another SNAT rule that changes the IP to y.y.y.y, and it finally goes to the destination.
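As for how to debug it: the path described in that last paragraph can be checked hop by hop with tcpdump, which shows at which step the packets (or the replies) disappear. A sketch, with interface names as in the question:

```shell
# On dock: does container traffic reach the bridge with its original source?
tcpdump -ni br-test

# On dock: is it SNATed to 192.168.10.2 and sent into the tunnel?
tcpdump -ni tun10

# On proxy: does it arrive over the tunnel, and does it leave eth0 as y.y.y.y?
tcpdump -ni tun10
tcpdump -ni eth0
```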

Setting up 4 containers with 4 IPs and 2 interfaces on EC2

I am trying to set up 4 containers (with nginx) on a system with 4 IPs and 2 interfaces. Can someone please help me? For now only 3 containers are accessible; the 4th one times out when accessed from the browser instead of showing a welcome page. I have added the IP routes needed.
Host is Ubuntu.
When this happened I thought it had something to do with the IP routes. So on the same system I installed Apache and created 4 virtual hosts, each listening on a different IP and with a different document root.
When checked, all the IPs were accessible and showed the correct documents.
So now I am stuck. What do I do now?
Configuration:
4 IPs and 2 interfaces, so I created 2 IP aliases. All IPs are configured in /etc/network/interfaces except the first one; eth0 is set to DHCP mode.
auto eth0:1
iface eth0:1 inet static
    address 172.31.118.182
    netmask 255.255.255.0

auto eth1
iface eth1 inet static
    address 172.31.119.23
    netmask 255.255.255.0

auto eth1:1
iface eth1:1 inet static
    address 172.31.119.11
    netmask 255.255.255.0
It goes like this (the IPs are private, so I guess there is no problem sharing them here):
eth0 - 172.31.118.249
eth0:1 - 172.31.118.182
eth1 - 172.31.119.23
eth1:1 - 172.31.119.11
Now the docker creation commands
All are just basic nginx containers, so when working they will show the default nginx page.
sudo docker create -i -t -p 172.31.118.249:80:80 --name web1 web_fresh
sudo docker create -i -t -p 172.31.118.182:80:80 --name web2 web_fresh
sudo docker create -i -t -p 172.31.119.23:80:80 --name web3 web_fresh
sudo docker create -i -t -p 172.31.119.11:80:80 --name web4 web_fresh
sudo docker start web1
sudo docker start web2
sudo docker start web3
sudo docker start web4
--
Now, web1 and web2 became immediately accessible, but the containers bound to eth1 and eth1:1 did not. So I figured the IP routes must be the issue and went ahead and added some routes.
ip route add 172.31.119.0/24 dev eth1 src 172.31.119.23 table eth1
ip route add default via 172.31.119.1 table eth1
ip route add 172.31.119.0/24 dev eth1 src 172.31.119.11 table eth11
ip route add default via 172.31.119.1 table eth11
ip rule add from 172.31.119.23 lookup eth1 prio 1002
ip rule add from 172.31.119.11 lookup eth11 prio 1003
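One way to narrow down the remaining failure is a few diagnostics (a sketch, not a confirmed fix): check that the kernel actually owns the alias address, loosen reverse-path filtering in case the asymmetric replies are being dropped, and see which table replies from that address would use:

```shell
# Is the alias address 172.31.119.11 actually present on eth1?
ip addr show dev eth1

# Loosen reverse-path filtering in case replies leave via a different path.
sysctl -w net.ipv4.conf.all.rp_filter=2
sysctl -w net.ipv4.conf.eth1.rp_filter=2

# Which route/table would replies from the alias address use?
ip route get 8.8.8.8 from 172.31.119.11
```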
This made web3 also accessible, but not the one on eth1:1. So this is where I am stuck at the moment.