I would like to set up strongSwan on my DockerHost in order to allow containers on the leftsubnet, which is a Docker network subnet, to communicate with my rightsubnet through the IPsec tunnel.
10.0.10.0/24, which is my leftsubnet on DockerHost, was created using:
docker network create --subnet 10.0.10.0/24 <network-name>
IPSEC IKE Configuration on DockerHost:
conn VPN-DOCKERHOST-REMOTE
authby=secret #this specifies how the connection is authenticated
auto=start #start the connection by default
type=tunnel #the type of connection
left=1.1.1.1 #This is the public ip address of server MAESTRIA
leftsubnet=10.0.10.0/24 #This is the subnet/private ip of server MAESTRIA
right=2.2.2.2 #This is the public ip address of server RESAMUT/remote server
rightsubnet=10.1.1.0/24 #This is the subnet/private ip of server RESAMUT
ike=aes128-sha256-modp3072 #Internet key exchange, type of encryption
keyexchange=ikev2 #Internet key exchange version
ikelifetime=28800s #Time before re authentication of keys
esp=aes128-sha256 #Encapsulation security suite of protocols
The IPsec IKE tunnel is up between my DockerHost and the remote server, but I can't ping from my containers to the remote subnet.
I think traffic from my containers that matches the remote subnet is routed outside of the tunnel because of iptables or something like that, but I can't figure out the problem.
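If it is indeed Docker's MASQUERADE rule rewriting the container source address before the IPsec policy is checked (an assumption on my part, not something I have confirmed), a minimal check and the usual exemption rule would look like this:
# show the NAT rules Docker installed for its networks
iptables -t nat -S POSTROUTING
# skip NAT for container traffic headed to the remote subnet, so it still
# matches the 10.0.10.0/24 -> 10.1.1.0/24 IPsec policy
iptables -t nat -I POSTROUTING 1 -s 10.0.10.0/24 -d 10.1.1.0/24 -j ACCEPT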
Instead of listening on a single IP address, e.g. localhost:
ports:
- "127.0.0.1:80:80"
I want the container to listen only on a local network, e.g.:
ports:
- "10.0.0.0/16:80:80"
ERROR: The Compose file './docker-compose.yml' is invalid because:
services.SERVICE.ports contains an invalid type, it should be a number, or an object
Is this possible?
I don't want to use things like swarm mode etc., yet.
If an IP range is not supported, can I at least use multiple IP addresses, like 10.0.0.2 and 10.0.0.3?
ERROR: for CONTAINER Cannot start service SERVICE: driver failed programming external connectivity on endpoint CONTAINER (...): Error starting userland proxy: listen tcp 10.0.0.3:80: bind: cannot assign requested address
ERROR: for SERVICE Cannot start service SERVICE: driver failed programming external connectivity on endpoint CONTAINER (...): Error starting userland proxy: listen tcp 10.0.0.3:80: bind: cannot assign requested address
Or is it not even supported to listen on 10.0.0.3?
The host machine is connected to 10.0.0.0/16:
> ifconfig
ens10: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.0.0.2 netmask 255.255.255.255 broadcast 10.0.0.2
inet6 f**0::8**0:ff:f**9:b**7 prefixlen 64 scopeid 0x20<link>
ether **:00:00:**:**:** txqueuelen 1000 (Ethernet)
Listening on a whole network is not how this works; the service listens at a single IP address.
Let's say your VM has two network interfaces (ethernet cards):
Network 1 → subnet: 10.0.0.0/24 and IP 10.0.0.100
Network 2 → subnet: 10.0.1.0/24 and IP 10.0.1.200
If you set 127.0.0.1:80:80, that means your service listens on 127.0.0.1 (localhost), port 80.
If you want to access the service from the 10.0.0.0/24 subnet, you should set 10.0.0.100:80:80 and use the address http://10.0.0.100:80 to be able to connect to your container from external hosts.
If you want to access the service from multiple networks simultaneously, you can bind the container port to multiple host addresses (the IP is the host address the port is bound to):
ports:
- 10.0.0.100:80:80
- 10.0.1.200:80:80
- 127.0.0.1:80:80
And don't forget to open port 80 in the VM's firewall, if a firewall exists and restricts that network.
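For example, with firewalld (assuming that is the firewall in use; adjust for ufw or plain iptables):
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --reload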
I think you misunderstood this field.
When you map 127.0.0.1:80:80 you map port 80 on your host's 127.0.0.1 interface to port 80 in your container.
In the case of 127.0.0.1 you can only access it from inside your host.
When you map 10.0.0.3:80:80 you map interface 10.0.0.3 on your host to your container, and every IP that can reach 10.0.0.3 will have access to your container's mapping.
But in any case this field does not do any filtering of who can access the container.
EDIT: After your edit I see that I misunderstood your question.
You want Docker to create a "bridge interface" so you don't have to share the IP of your host.
I don't think this is possible when using port mapping.
If you give Compose ports: (or docker run -p) an IP address, it must be a specific known IP address of a host interface, or 0.0.0.0 for "all interfaces". The Docker daemon gives this specific IP address to a bind(2) call, which takes an address and not a network, and follows the rules in ip(7) for IPv4.
With the output you show, you can only bind containers to 10.0.0.2. If you want to use other IP addresses on the same network, you also need to assign them to the host; see for example How can I (from CLI) assign multiple IP addresses to one interface? on Ask Ubuntu, and then you can bind a container to the newly-added address.
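For example, a rough sketch using the ens10 interface from your ifconfig output (the address does not survive a reboot unless you also add it to your network configuration):
sudo ip addr add 10.0.0.3/16 dev ens10
After that, a ports: entry of 10.0.0.3:80:80 can bind successfully.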
If your system is on multiple physical networks, you can have any number of ports: so long as the host address and host port are unique. In particular you can have multiple ports: that all forward to the same container port.
ports:
# make this visible to the external load balancer on port 80
- '192.168.17.2:80:3000'
# also make this visible to the internal network also on port 80
- '10.0.0.2:80:3000'
# and the management network but on port 3000
- '10.99.0.36:3000:3000'
Again, the host must already have these IP addresses in the ifconfig output.
I have a bare metal server on Hetzner with IP 5.6.7.8 and 8 additional IPs reserved for me.
IPs: 1.2.3.144 to 1.2.3.151
subnet: 1.2.3.144/29
netmask: 255.255.255.248
broadcast: 1.2.3.151
gateway: 5.6.7.8
Now I want to create a Docker network of type macvlan:
docker network create -d macvlan --subnet=1.2.3.144/29 --gateway=5.6.7.8 -o parent=enp0s31f6 macvlan1
But this command causes an error:
no matching subnet for gateway 5.6.7.8
Note that when I set, for example, IP 1.2.3.150 and gateway 5.6.7.8 on a virtual machine on the host, it works correctly! But I can't set this non-matching gateway in the docker network create command.
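For reference, what the virtual machine effectively does to make an out-of-subnet gateway work is roughly the following (shown only to illustrate the comparison, with eth0 standing in for whatever interface the VM uses; docker network create validates that --gateway lies inside --subnet and rejects the command before any routes are configured):
ip route add 5.6.7.8 dev eth0                     # make the gateway reachable as an on-link host route
ip route add default via 5.6.7.8 dev eth0 onlink  # then route everything through it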
I have a problem with a WireGuard VPN connection to my office network. I am just testing whether WireGuard can replace OpenVPN (which is working fine).
Both sides are Debian 9.7.
The connection is established between client and server successfully, I can ping and ssh in both directions.
The local network 10.5.5.0/24 is attached on the server side; the server's address there is 10.5.5.5, and there are two other computers, 10.5.5.100 and 10.5.5.200.
Server Wireguard Address = 10.0.1.1/24, Client = 10.0.1.3/24
AllowedIPs on Server: 10.0.1.3/32
AllowedIPs on Client: 10.0.1.1/32, 10.5.5.0/24
Routes on the client are set; I can ping the server from the client at both 10.0.1.1 and 10.5.5.5.
I can't ping/access any other computer on 10.5.5.0/24 (10.5.5.100, 10.5.5.200).
I need to know whether the problem is with WireGuard, Debian, or somewhere between chair and keyboard.
Finally ... I figured it out: missing iptables rule:
iptables -t nat -A POSTROUTING -o ens224 -j MASQUERADE
where ens224 is the network interface for subnet 10.5.5.0/24.
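For completeness, the rule only helps if the server also forwards packets between the tunnel and the LAN. A minimal sketch, assuming the WireGuard interface is called wg0 (adjust the interface names if yours differ):
sysctl -w net.ipv4.ip_forward=1                  # persist it in /etc/sysctl.conf
iptables -A FORWARD -i wg0 -o ens224 -j ACCEPT   # only needed if the FORWARD policy is not ACCEPT
iptables -A FORWARD -i ens224 -o wg0 -j ACCEPT
iptables -t nat -A POSTROUTING -o ens224 -j MASQUERADE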
Newbie trying to install/set up CentOS 7. I can ping other machines in the domain, but can't ping the gateway, google.com, etc. I get "destination host unreachable" for the gateway and "unknown host google.com" when pinging google.com.
Please advise.
/etc/sysconfig/network-scripts/ifcfg-enp4s0:
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=enp4s0
UUID=c39e3407-a566-4586-8fb9-fd4e3bfc4617
DEVICE=enp4s0
ONBOOT=yes
IPADDR="192.168.192.150"
GATEWAY="208.67.254.41"
DNS1="8.8.8.8"
DNS2="8.8.4.4"
/etc/resolv.conf:
# Generated by NetworkManager
nameserver 8.8.8.8
nameserver 8.8.4.4
/etc/sysconfig/network:
# Created by anaconda
NETWORKING=yes
HOSTNAME=centos7
GATEWAY=208.67.254.41
Since it says "unknown host google.com", the machine is not able to route the request to the internet DNS server (8.8.8.8) to resolve Google's IP, and when you ping the gateway you get "destination host unreachable".
For one machine to connect to another, both should be on the same LAN; if they are not, there should be a machine on the LAN that acts as a gateway. In your case you have pointed the gateway to 208.67.254.41, which is obviously not on your LAN, so 208.67.254.41 has to be reachable through some machine on the LAN. To do that, use the route command,
which adds a routing entry to the machine's routing table:
route add -host <host> gw <gateway-ip> dev <interface>
In your case the command goes like:
route add -host 208.67.254.41 gw <lan-gateway-ip> dev <interface>
e.g.: route add -host 192.168.12.45 gw 192.168.12.1 dev eth0
Comment out the IPv6 entries if IPv6 is not used.
Make sure IP forwarding is turned on on the gateway machine, in /etc/sysctl.conf on that machine.
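For example, on the gateway machine:
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p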
Have you disabled Network Manager?
Command line:
service NetworkManager status
I have been trying to set up geo-replication with GlusterFS servers. Everything worked as expected in my test and staging environments, but then I tried production and got stuck.
Let's say I have:
the gluster fs server on public IP 1.1.1.1
the gluster fs slave on public IP 2.2.2.2, but this IP is on interface eth1
The eth0 interface on the gluster fs slave server has IP 192.168.0.1.
So when I run the command on 1.1.1.1 (firewall and SSH keys are set up properly):
gluster volume geo-replication vol0 2.2.2.2::vol0 create push-pem
I get an error.
Unable to fetch slave volume details. Please check the slave cluster and slave volume.
geo-replication command failed
The error itself is not that important in this case; the problem is the slave IP address:
2015-03-16T11:41:08.101229+00:00 xxx kernel: TCP LOGDROP: IN= OUT=eth0 SRC=1.1.1.1 DST=192.168.0.1 LEN=52 TOS=0x00 PREC=0x00 TTL=64 ID=24243 DF PROTO=TCP SPT=1015 DPT=24007 WINDOW=14600 RES=0x00 SYN URGP=0
As you can see in the firewall drop log above, port 24007 of the slave gluster daemon is advertised on the private IP of interface eth0 on the slave server, when it should be the IP of the eth1 interface. So the master cannot connect and times out.
Is there a way to force the gluster server to advertise interface eth1, or to bind to it only?
I use cfengine and ansible to push configuration, so binding to an interface could be a better solution than binding to an IP, but any working solution will do.
Thank you in advance.
I've encountered this issue but in a different context.
I was trying to geo-replicate two nodes which were both behind a NAT (AWS instances in different regions).
When the master connects to the slave via the public IP to check for volume compatibility/size and other details, it retrieves the hostname of the slave, which usually resolves to something that only has meaning in that remote region.
Then it uses that hostname to dial back to the slave when later setting up the session, which fails, as that hostname resolves to a private IP in a different region.
My workaround for the issue was to use hostnames when creating the volumes, probing for peers, and establishing geo-replication, and then add an /etc/hosts entry mapping the slave's hostname (which normally resolves to its private IP) to its public IP instead.
This gets you to the point where you can establish a session, but I haven't had any luck actually getting it to sync, as it uses the wrong IP somewhere along the way again.
Edit:
I've actually managed to get it running by adding /etc/hosts hacks on both sides.
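For illustration, the /etc/hosts hack looks roughly like this (the hostnames are placeholders; use whatever names the peers actually report):
# on the master
2.2.2.2    slave-node     # slave's hostname forced to its public IP
# on the slave
1.1.1.1    master-node    # master's hostname forced to its public IP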
GlusterFS has no notion of the network layer. Check your routes. If the next-hop for your geo-replication slave is on eth1, then gluster will open a port on that interface for the slave IP address.
Also make sure your firewall is configured to forward geo-replication traffic on this port.
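A quick way to verify which interface and source address will actually be used is ip route get. For example, on the slave:
ip route get 1.1.1.1
and on the master:
ip route get 2.2.2.2
The output shows the device and src address the kernel selects for that destination.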