Hi, I need to assign a specific IP to each Docker container for my test automation program, SIPp.
I cannot ping or telnet to 192.168.173.215 from the container.
Here is my configuration:
version: '3.3'
services:
  sipp4:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: sipp4
    networks:
      mynetwork:
        ipv4_address: 192.168.128.2
    volumes:
      - ./sipp-3.4.1/:/opt/app/sipp
    environment:
      - "TZ=America/Los_Angeles"
    ulimits:
      nofile:
        soft: 200000
        hard: 400000
    working_dir: /opt/app/sipp
    command: 192.168.173.215:5060 -sf callerCall.xml -inf callerCall.csv -i 192.168.128.2 -aa -m 1 -trace_msg -t un -skip_rlimit -trace_err
networks:
  mynetwork:
    ipam:
      driver: default
      config:
        - subnet: 192.168.128.0/18
          gateway: 192.168.128.200
I am sure the subnet and gateway are correct because I can assign an IP from this range to a VMware virtual host.
Here is ifconfig from inside the container (bash):
ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.128.2 netmask 255.255.192.0 broadcast 192.168.191.255
ether 02:42:c0:a8:80:02 txqueuelen 0 (Ethernet)
RX packets 7 bytes 586 (586.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 5 bytes 210 (210.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 3 bytes 1728 (1.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3 bytes 1728 (1.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Here is ip a:
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
389: eth0@if390: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:80:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.128.2/18 brd 192.168.191.255 scope global eth0
valid_lft forever preferred_lft forever
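One thing worth checking from inside the container is which route is chosen for the target. Note that 192.168.173.215 falls inside 192.168.128.0/18 (which covers 192.168.128.0–192.168.191.255), so the container will treat it as on-link rather than send it to the gateway. A quick check, assuming iproute2 is available in the image:
# Run inside the sipp4 container:
ip route get 192.168.173.215
# Expect it to resolve to "dev eth0" (on-link) rather than "via 192.168.128.200".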
On the other hand, when I use the configuration below, the container can ping and access 192.168.173.215, and the auto-assigned IP is 172.17.0.1:
  sipp1:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: sipp1
    network_mode: host
    volumes:
      - ./sipp-3.4.1/:/opt/app/sipp
    environment:
      - "TZ=America/Los_Angeles"
    ulimits:
      nofile:
        soft: 200000
        hard: 400000
    working_dir: /opt/app/sipp
    command: ./sipp 192.168.173.215:5060 -sf callerCall.xml -inf callerCall.csv -i 172.17.0.1 -aa -m 1 -trace_msg -t un -skip_rlimit -trace_err
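With network_mode: host the container shares the host's network namespace, so it uses the host's interfaces and routing table, which is presumably why 192.168.173.215 is reachable here. A quick check (using the container name sipp1 from above):
# The container's routing table should be identical to the host's:
docker exec sipp1 ip route
ip route   # run on the host for comparison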
When I use the configuration below, it gets the IP 172.18.0.2 and again cannot ping anything:
  sipp4:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: sipp4
    volumes:
      - ./sipp-3.4.1/:/opt/app/sipp
    environment:
      - "TZ=America/Los_Angeles"
    ulimits:
      nofile:
        soft: 200000
        hard: 400000
    working_dir: /opt/app/sipp
    command: 192.168.173.215:5060 -sf callerCall.xml -inf callerCall.csv -aa -m 1 -trace_msg -t un -skip_rlimit -trace_err
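In this last variant the container sits on the Compose project's default bridge (172.18.0.0/16, judging by the assigned address) and relies on the host's NAT for outbound traffic, so two things worth checking are the container's default route and the host's MASQUERADE rule:
# Default route inside the container (container name sipp4 as above):
docker exec sipp4 ip route
# On the host: is there a MASQUERADE rule for that bridge subnet?
sudo iptables -t nat -S POSTROUTING | grep 172.18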
I want to try and reproduce (for network simulation purposes) a network dumbbell using Docker and docker-compose. In order to do this, I declare 3 internal networks in my docker-compose.yml file:
usrnet (172.20.10.0/24)
backbone (172.20.250.0/24)
srvnet (172.20.20.0/24)
I also declare multiple containers:
usr1, in usrnet (172.20.10.101)
usr2, in usrnet (172.20.10.102)
r1, in usrnet (172.20.10.2) AND backbone (172.20.250.2)
r2, in srvnet (172.20.20.2) AND backbone (172.20.250.3)
srv1, in srvnet (172.20.20.101)
srv2 in srvnet (172.20.20.102)
Then, inside each container, I set the routing rules properly (using ip route add ...) so that packets flow directly through the containers and not through the host gateways. For instance, on r1:
root@r1:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
21: eth0@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:14:fa:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.20.250.2/24 brd 172.20.250.255 scope global eth0
valid_lft forever preferred_lft forever
25: eth1@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:14:0a:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.20.10.2/24 brd 172.20.10.255 scope global eth1
valid_lft forever preferred_lft forever
root@r1:/# ip route list
default via 172.20.250.1 dev eth0
172.20.10.0/24 dev eth1 proto kernel scope link src 172.20.10.2
172.20.20.0/24 via 172.20.250.3 dev eth0
172.20.250.0/24 dev eth0 proto kernel scope link src 172.20.250.2
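The routes on the edge containers are set up along the same lines (a rough sketch, since the exact commands are not shown here):
# usr1 / usr2: send everything off-subnet via r1's usrnet address
ip route replace default via 172.20.10.2
# srv1 / srv2: send everything off-subnet via r2's srvnet address
ip route replace default via 172.20.20.2
# r2: reach usrnet via r1's backbone address (mirror of r1's rule for srvnet)
ip route add 172.20.10.0/24 via 172.20.250.2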
The problem is, when I try to ping srv1 from usr1 for instance, the packet's source IP keeps getting "masqueraded" as the host gateway addresses:
tcpdump on usr1 shows IP packets 172.20.10.101 > 172.20.20.101 (as it should be)
tcpdump on r1 shows IP packets 172.20.10.1 > 172.20.20.101 (masqueraded by the usrnet gateway?)
tcpdump on r2 shows IP packets 172.20.250.1 > 172.20.20.101 (masqueraded by the backbone gateway?)
tcpdump on srv1 shows IP packets 172.20.20.1 > 172.20.20.101 (masqueraded by the srvnet gateway?)
So srv1 answers to 172.20.20.1 (as it is now the source IP of the ICMP echo packet) and the reply is never forwarded back to usr1.
I suspect this has to do with Docker's iptables/nftables rules. Indeed, nft flush ruleset on the host, while being a terrible idea, does the trick and my containers can communicate in the intended way.
Is there a "cleaner" way than disabling nft altogether?
Appendix: minimal docker-compose.yml setup to reproduce
version: "3.9"
services:
usr1:
image: weibeld/ubuntu-networking:latest
cap_add:
- NET_ADMIN
command: sleep 10000
networks:
usrnet:
ipv4_address: 172.20.10.101
usr2:
image: weibeld/ubuntu-networking:latest
cap_add:
- NET_ADMIN
command: sleep 10000
networks:
usrnet:
ipv4_address: 172.20.10.102
r1:
image: weibeld/ubuntu-networking:latest
cap_add:
- NET_ADMIN
- SYS_MODULE
sysctls:
- net.ipv4.ip_forward=1
command: sleep 10000
networks:
usrnet:
ipv4_address: 172.20.10.2
backbone:
ipv4_address: 172.20.250.2
r2:
image: weibeld/ubuntu-networking:latest
cap_add:
- NET_ADMIN
- SYS_MODULE
sysctls:
- net.ipv4.ip_forward=1
command: sleep 10000
networks:
srvnet:
ipv4_address: 172.20.20.2
backbone:
ipv4_address: 172.20.250.3
srv1:
image: weibeld/ubuntu-networking:latest
cap_add:
- NET_ADMIN
command: sleep 10000
networks:
srvnet:
ipv4_address: 172.20.20.101
srv2:
image: weibeld/ubuntu-networking:latest
cap_add:
- NET_ADMIN
command: sleep 10000
networks:
srvnet:
ipv4_address: 172.20.20.102
networks:
backbone:
internal: true
ipam:
config:
- subnet: 172.20.250.0/24
usrnet:
internal: true
ipam:
config:
- subnet: 172.20.10.0/24
srvnet:
internal: true
ipam:
config:
- subnet: 172.20.20.0/24
After some digging, I managed to make it work on a freshly-installed Debian virtual machine, using exactly this docker-compose.yml file, by tuning iptables rules.
I flushed the DOCKER-ISOLATION-STAGE-1 chain, put a single RETURN rule in it, and then changed the FORWARD chain policy to ACCEPT.
$ sudo nft flush chain ip filter DOCKER-ISOLATION-STAGE-1
$ sudo nft add rule ip filter DOCKER-ISOLATION-STAGE-1 return
$ sudo iptables -P FORWARD ACCEPT
I could have refined this a bit more, but this was sufficient to let me achieve what I wanted.
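For reference, the same change expressed with plain iptables syntax (assuming the stock Docker chain names) is roughly:
# Flush the isolation chain, make it a no-op, and open the FORWARD policy:
sudo iptables -F DOCKER-ISOLATION-STAGE-1
sudo iptables -A DOCKER-ISOLATION-STAGE-1 -j RETURN
sudo iptables -P FORWARD ACCEPT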
I'm trying to "dockerize" a net application.
My database is in the company servers and we connect to them through VPN with double factor authentication. I can do this correctly with no problem.
My app runs correctly without Docker and I can access it with my app and other tools like SSMS.
The problem comes when I try to run the app from a docker container. Here is my docker compose:
services:
  orenes.procedimientos.firma.api:
    environment:
      - TZ=CET
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://+:6555
      - ASPNETCORE_ConnectionStrings__FirmasConnString=Server=10.1.33.34;Database=YYYY;User ID=ZZZZ;password=******
    image: ${DOCKER_REGISTRY-}orenesprocedimientosfirmaapi
    extra_hosts:
      - "SV-GORDEVSQL:10.1.33.34"
    build:
      context: ../../
      dockerfile: Orenes.Procedimientos.Firma.API/Dockerfile
      network: host
    ports:
      - 6555:6555
    networks:
      - vpn
networks:
  vpn:
    ipam:
      config:
        - subnet: 10.1.0.0/20
If I go inside the container and ping the server's address, I get:
From 10.1.0.1 icmp_seq=2 Destination Host Unreachable
This is the output of the ifconfig command:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.1.0.2 netmask 255.255.255.0 broadcast 10.1.0.255
ether 02:42:0a:01:00:02 txqueuelen 0 (Ethernet)
RX packets 330 bytes 394185 (384.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 278 bytes 16675 (16.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 15 bytes 1300 (1.2 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 15 bytes 1300 (1.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Running Docker Desktop on Windows 10.
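For what it's worth, 10.1.33.34 is outside the vpn network's 10.1.0.0/20 range (10.1.0.0–10.1.15.255), so the container hands the packet to the bridge gateway 10.1.0.1, which lives inside the Docker Desktop VM and typically has no view of the Windows VPN adapter. A quick check from inside the container (assuming iproute2 is available in the image):
# Which path does the container use for the database server?
ip route get 10.1.33.34
# Expect something like "10.1.33.34 via 10.1.0.1 dev eth0 ..."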
If someone can help, I will be really grateful.
Thx!
I have a Docker Heartbeat container up and running, from which a connection should be made to an IPv6 endpoint.
From inside the heartbeat container the ping6 command doesn't succeed; on the host it works.
In the container:
sh-4.2$ ping6 ipv6.google.com
PING ipv6.google.com(ams15s32-in-x0e.1e100.net (2a00:1450:400e:809::200e)) 56 data bytes
^C
On the VM:
[root@myserver myuser]# ping6 ipv6.google.com
PING ipv6.google.com(ams15s30-in-x0e.1e100.net (2a00:1450:400e:807::200e)) 56 data bytes
64 bytes from ams15s30-in-x0e.1e100.net (2a00:1450:400e:807::200e): icmp_seq=1 ttl=120 time=6.55 ms
64 bytes from ams15s30-in-x0e.1e100.net (2a00:1450:400e:807::200e): icmp_seq=2 ttl=120 time=6.60 ms
I've configured daemon.json with the subnet, and the docker-compose file takes care of setting up the IPv6 network:
version: "2.2"
services:
heartbeat:
image: docker.elastic.co/beats/heartbeat:7.10.1
container_name: "heartbeat"
volumes:
- "./elastic/heartbeat.yml:/usr/share/heartbeat/heartbeat.yml:ro"
- "./elastic/monitor.d/:/usr/share/heartbeat/monitor.d/:ro"
networks:
- beats
networks:
beats:
enable_ipv6: true
driver: bridge
ipam:
driver: default
config:
- subnet: 2a02:1800:1e0:408f::806:0/112
- gateway: 2a02:1800:1e0:408f::806:1
docker network ls shows the network set up correctly:
docker network ls
NETWORK ID NAME DRIVER SCOPE
...
328408216a9f deployments_beats bridge local
...
And the bridge network appears in the ifconfig overview with the following info:
br-328408216a9f: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.19.0.1 netmask 255.255.0.0 broadcast 172.19.255.255
inet6 2a02:1800:1e0:408f::806:1 prefixlen 112 scopeid 0x0<global>
inet6 fe80::1 prefixlen 64 scopeid 0x20<link>
inet6 fe80::42:52ff:fe98:e176 prefixlen 64 scopeid 0x20<link>
ether 02:42:52:98:e1:76 txqueuelen 0 (Ethernet)
RX packets 8 bytes 656 (656.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 7 bytes 746 (746.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Anything I've missed during the setup?
You also need to enable IPv6 on the Docker engine:
Edit /etc/docker/daemon.json, set the ipv6 key to true and the fixed-cidr-v6 key to your IPv6 subnet. In this example we are setting it to 2001:db8:1::/64.
{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}
Save the file.
Reload the Docker configuration file.
$ systemctl reload docker
https://docs.docker.com/config/daemon/ipv6/
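After the reload, a quick way to verify that the container actually got an IPv6 address and a v6 route (assuming iproute2 is available in the image; "heartbeat" is the container_name from the compose file above):
docker exec heartbeat ip -6 addr show eth0
docker exec heartbeat ip -6 route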
Solved by using https://github.com/robbertkl/docker-ipv6nat.
I added the container to my Docker setup.
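The run command is roughly the one from the project's README (check it for the current flags; docker-ipv6nat needs host networking, elevated privileges and the Docker socket):
docker run -d --name ipv6nat --privileged --network host --restart unless-stopped \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /lib/modules:/lib/modules:ro \
  robbertkl/ipv6nat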
My daemon.json file in /etc/docker/:
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00::/64"
}
which will use the unique local address subnet.
In my docker-compose file I create an IPv6 network:
networks:
  beats:
    enable_ipv6: true
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: fd00:1::/80
Note the 1 prefix I'm using in the range.
Add your container to the network, and it works.
I am currently using tinc to create a VPN between two servers. This allows me to access server B from server A through the IP address 10.0.0.2, and it creates an interface:
tun0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1500
inet 10.0.0.1 netmask 255.255.255.0 destination 10.0.0.1
inet6 fe80::babb:cc53:dd5e:23f8 prefixlen 64 scopeid 0x20<link>
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 500 (UNSPEC)
RX packets 42 bytes 11987 (11.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 55 bytes 7297 (7.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
I would like to pass this route to my docker container on server A:
version: '3.2'
services:
  traefik:
    image: "traefik:v2.2.0"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "./acme:/acme"
      - "./traefik.toml:/traefik.toml"
      - "./rules:/etc/traefik/rules"
    networks:
      - traefik
    deploy:
      placement:
        constraints:
          - node.role == manager
networks:
  traefik:
    external: true
Currently, from inside the traefik container I can ping 10.0.0.2, but it is a completely different host.
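A quick way to see which path is actually used for that address (assuming the external traefik network is attachable; nicolaka/netshoot is just a convenient tool image):
# Attach a throwaway container to the same network and ask the kernel for the route:
docker run --rm --network traefik nicolaka/netshoot ip route get 10.0.0.2
# Expect it to go via the traefik network's gateway rather than via the host's tun0.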
If I remove:
networks:
  traefik:
    external: true
and add network_mode: host within the traefik service, I can route to 10.0.0.2, but then I cannot access the other containers that share the traefik network.
If I try to put them both together I get the error:
'network_mode' and 'networks' cannot be combined
In other words, how can I create the dashed-line connection?
This also illustrates my problem: container B can't be in both networks at once.
I added Server A just as a more real-world example of a swarm.
A solution I came up with was to not use tinc at all and instead use autossh to effectively port-forward, with a command like this:
autossh -M 43585 -o "compression=no" -o "cipher=aes128-gcm@openssh.com" -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -NR 3000:localhost:3000 root@serverA
This is run on server B (first image).
This means that I can then route from the container to server B via http://serverA:3000, for example.
Once I create the containers using docker-compose up -d, they are up and running, but they are not reachable on the local network (127.0.0.1).
I use the same project on another PC and it works there, so the docker-compose.yml is the same and known to work.
~ → docker info
Containers: 6
Running: 1
Paused: 0
Stopped: 5
Images: 19
Server Version: 18.03.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: cfd04396dc68220d1cecbe686a6cc3aa5ce3667c
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.14.31-1-MANJARO
Operating System: Manjaro Linux
OSType: linux
Architecture: x86_64
CPUs: 16
Total Memory: 15.67GiB
Name: phantom
ID: JO4V:TAN5:64SP:5VRL:RUOQ:ZRTX:SUGL:T5NF:IXB7:YHS6:2CA6:3HCT
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Checking the network interfaces, they all seem to be properly set up:
~ → ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: enp6s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 2c:fd:a1:73:7e:38 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.120/24 brd 192.168.1.255 scope global dynamic noprefixroute enp6s0
valid_lft 80202sec preferred_lft 80202sec
3: br-0e93106ef232: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:74:c6:77:24 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 brd 172.18.255.255 scope global br-0e93106ef232
valid_lft forever preferred_lft forever
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:40:ad:aa:5b brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
74: veth73892ae@if73: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether ce:fd:5c:af:d2:06 brd ff:ff:ff:ff:ff:ff link-netnsid 0
Looking at the iptables rules, nothing seems to be blocking the connection to the container.
Note: just to be sure it wasn't causing a conflict, I disabled IPv6, but nothing changed.
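The Docker-managed chains can be listed as follows (chain names assume a stock Docker install):
sudo iptables -L DOCKER-USER -n -v
sudo iptables -L DOCKER -n -v
sudo iptables -t nat -L DOCKER -n -v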
Here is the docker-compose.yml file:
version: "3.1"
services:
redis:
image: redis:alpine
container_name: proj-redis
rabbitmq:
image: rabbitmq:alpine
container_name: proj-rabbitmq
mysql:
image: mysql:8.0
container_name: proj-mysql
working_dir: /application
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=database
- MYSQL_USER=database
- MYSQL_PASSWORD=database
webserver:
image: nginx:alpine
container_name: proj-webserver
working_dir: /application
volumes:
- ./htdoc:/application
- ./phpdocker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
ports:
- "80:80"
- "9003:9003" # xDebug
- "15672:15672" # RabbitMQ
links:
- php-fpm
php-fpm:
build:
context: .
dockerfile: phpdocker/php-fpm/Dockerfile
container_name: proj-php-fpm
working_dir: /application
environment:
XDEBUG_CONFIG: "remote_host=172.21.0.1"
PHP_IDE_CONFIG: "serverName=dev.local"
volumes:
- ./htdoc:/application
- ./phpdocker/php-fpm/php-ini-overrides.ini:/etc/php/7.0/fpm/conf.d/99-overrides.ini
links:
- mysql
- rabbitmq
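A couple of quick host-side checks narrow down whether the port mapping was created at all (container name taken from the compose file above):
# Is anything listening on port 80 on the host?
ss -tlnp | grep ':80 '
# What did Docker actually publish for the webserver container?
docker port proj-webserver
# Does the host answer locally?
curl -v http://127.0.0.1/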