I have a Docker heartbeat container up and running, from which a connection should be made to an IPv6 endpoint. From inside the heartbeat container the ping6 command doesn't succeed; from the host it works.
In the container:
sh-4.2$ ping6 ipv6.google.com
PING ipv6.google.com(ams15s32-in-x0e.1e100.net (2a00:1450:400e:809::200e)) 56 data bytes
^C
On the VM:
[root@myserver myuser]# ping6 ipv6.google.com
PING ipv6.google.com(ams15s30-in-x0e.1e100.net (2a00:1450:400e:807::200e)) 56 data bytes
64 bytes from ams15s30-in-x0e.1e100.net (2a00:1450:400e:807::200e): icmp_seq=1 ttl=120 time=6.55 ms
64 bytes from ams15s30-in-x0e.1e100.net (2a00:1450:400e:807::200e): icmp_seq=2 ttl=120 time=6.60 ms
I've configured the daemon.json file with the subnet, and the docker-compose file takes care of setting up the IPv6 network:
version: "2.2"
services:
  heartbeat:
    image: docker.elastic.co/beats/heartbeat:7.10.1
    container_name: "heartbeat"
    volumes:
      - "./elastic/heartbeat.yml:/usr/share/heartbeat/heartbeat.yml:ro"
      - "./elastic/monitor.d/:/usr/share/heartbeat/monitor.d/:ro"
    networks:
      - beats
networks:
  beats:
    enable_ipv6: true
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 2a02:1800:1e0:408f::806:0/112
          gateway: 2a02:1800:1e0:408f::806:1
docker network ls shows the network correctly set up:
docker network ls
NETWORK ID     NAME                DRIVER    SCOPE
...
328408216a9f   deployments_beats   bridge    local
...
And the bridge network appears in the ifconfig overview with the following info:
br-328408216a9f: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.19.0.1  netmask 255.255.0.0  broadcast 172.19.255.255
        inet6 2a02:1800:1e0:408f::806:1  prefixlen 112  scopeid 0x0<global>
        inet6 fe80::1  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::42:52ff:fe98:e176  prefixlen 64  scopeid 0x20<link>
        ether 02:42:52:98:e1:76  txqueuelen 0  (Ethernet)
        RX packets 8  bytes 656 (656.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7  bytes 746 (746.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Anything I've missed during the setup?
You also need to enable IPv6 on the Docker engine:
Edit /etc/docker/daemon.json and set the ipv6 key to true and the fixed-cidr-v6 key to your IPv6 subnet. In this example we are setting it to 2001:db8:1::/64.
{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}
Save the file.
Reload the Docker configuration file.
$ systemctl reload docker
https://docs.docker.com/config/daemon/ipv6/
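As a quick check (my addition, not from the linked docs): after the reload, a throwaway container on the default bridge should get an address from the fixed-cidr-v6 range in addition to its link-local one:
$ docker run --rm alpine ip addr show eth0
# look for an inet6 address with "scope global" from 2001:db8:1::/64,
# besides the fe80:: link-local entry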
Solved by using https://github.com/robbertkl/docker-ipv6nat.
I added the container to my Docker setup.
My daemon.json file in /etc/docker/:
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00::/64"
}
which will use the unique local address subnet.
In my docker-compose I create an IPv6 network:
networks:
  beats:
    enable_ipv6: true
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: fd00:1::/80
Note the 1 prefix I'm using in the range.
Add your container to the network, and it works.
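For completeness, this is roughly how the ipv6nat container itself is started (the flags below follow the project's README at the time of writing; double-check there for your version):
docker run -d --name ipv6nat --restart unless-stopped --privileged --network host \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /lib/modules:/lib/modules:ro \
  robbertkl/ipv6nat
It watches the Docker socket and maintains ip6tables NAT rules for IPv6-enabled bridge networks, mirroring what Docker already does for IPv4.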
Related
I'm trying to "dockerize" a .NET application.
My database is on the company servers and we connect to them through a VPN with two-factor authentication. I can do this correctly with no problem.
My app runs correctly without Docker, and I can access the database with my app and other tools like SSMS.
The problem comes when I try to run the app from a Docker container. Here is my docker-compose:
services:
  orenes.procedimientos.firma.api:
    environment:
      - TZ=CET
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://+:6555
      - ASPNETCORE_ConnectionStrings__FirmasConnString=Server=10.1.33.34;Database=YYYY;User ID=ZZZZ;password=******
    image: ${DOCKER_REGISTRY-}orenesprocedimientosfirmaapi
    extra_hosts:
      - "SV-GORDEVSQL:10.1.33.34"
    build:
      context: ../../
      dockerfile: Orenes.Procedimientos.Firma.API/Dockerfile
      network: host
    ports:
      - 6555:6555
    networks:
      - vpn
networks:
  vpn:
    ipam:
      config:
        - subnet: 10.1.0.0/20
If I go inside the container and try to ping the server, I receive:
From 10.1.0.1 icmp_seq=2 Destination Host Unreachable
This is the output of the ifconfig command:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.1.0.2  netmask 255.255.255.0  broadcast 10.1.0.255
        ether 02:42:0a:01:00:02  txqueuelen 0  (Ethernet)
        RX packets 330  bytes 394185 (384.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 278  bytes 16675 (16.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 15  bytes 1300 (1.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 15  bytes 1300 (1.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Running Docker Desktop on Windows 10.
If someone can help, I will be really grateful.
Thx!
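(A note for anyone debugging something similar, not part of the original question: Docker Desktop on Windows runs containers inside a VM, so a VPN session established on the Windows host is only reachable from a container if the host routes that traffic for the VM. A quick way to narrow it down is to test from a throwaway container on the same user-defined network; the network name below is hypothetical and depends on your compose project name:)
# attach a busybox container to the same network and ping the DB host
docker run --rm --network <project>_vpn busybox ping -c 3 10.1.33.34
# if this fails while "ping 10.1.33.34" works on the Windows host itself,
# the VPN route is not being shared with the Docker Desktop VM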
Hi, I need to assign a specific IP to each Docker container for my test automation program called SIPp.
I cannot ping or telnet to 192.168.173.215.
Here is my configuration:
version: '3.3'
services:
  sipp4:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: sipp4
    networks:
      mynetwork:
        ipv4_address: 192.168.128.2
    volumes:
      - ./sipp-3.4.1/:/opt/app/sipp
    environment:
      - "TZ=America/Los_Angeles"
    ulimits:
      nofile:
        soft: 200000
        hard: 400000
    working_dir: /opt/app/sipp
    command: 192.168.173.215:5060 -sf callerCall.xml -inf callerCall.csv -i 192.168.128.2 -aa -m 1 -trace_msg -t un -skip_rlimit -trace_err
networks:
  mynetwork:
    ipam:
      driver: default
      config:
        - subnet: 192.168.128.0/18
          gateway: 192.168.128.200
I am sure about the subnet and gateway because I can assign this IP with a VMware virtual host.
Here is ifconfig inside the Docker container (bash):
ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.128.2  netmask 255.255.192.0  broadcast 192.168.191.255
        ether 02:42:c0:a8:80:02  txqueuelen 0  (Ethernet)
        RX packets 7  bytes 586 (586.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5  bytes 210 (210.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 3  bytes 1728 (1.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3  bytes 1728 (1.6 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Here is the ip output:
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
389: eth0@if390: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:c0:a8:80:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.128.2/18 brd 192.168.191.255 scope global eth0
       valid_lft forever preferred_lft forever
On the other hand, when I use the configuration below, it can ping and access 192.168.173.215, and the auto-assigned IP is 172.17.0.1:
sipp1:
  build:
    context: .
    dockerfile: Dockerfile
  container_name: sipp1
  network_mode: host
  volumes:
    - ./sipp-3.4.1/:/opt/app/sipp
  environment:
    - "TZ=America/Los_Angeles"
  ulimits:
    nofile:
      soft: 200000
      hard: 400000
  working_dir: /opt/app/sipp
  command: ./sipp 192.168.173.215:5060 -sf callerCall.xml -inf callerCall.csv -i 172.17.0.1 -aa -m 1 -trace_msg -t un -skip_rlimit -trace_err
When I use the configuration below, it gets IP 172.18.0.2 and again cannot ping anywhere:
sipp4:
  build:
    context: .
    dockerfile: Dockerfile
  container_name: sipp4
  volumes:
    - ./sipp-3.4.1/:/opt/app/sipp
  environment:
    - "TZ=America/Los_Angeles"
  ulimits:
    nofile:
      soft: 200000
      hard: 400000
  working_dir: /opt/app/sipp
  command: 192.168.173.215:5060 -sf callerCall.xml -inf callerCall.csv -aa -m 1 -trace_msg -t un -skip_rlimit -trace_err
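(An observation on the failing configs above, not from the original post: the subnet 192.168.128.0/18 spans 192.168.128.0-192.168.191.255, which contains the target 192.168.173.215. The container therefore treats the target as bridge-local and ARPs for it on the Docker bridge instead of routing through the host, so nothing ever answers. A sketch of recreating the network with a subnet that does not swallow the target; the /24 and gateway here are illustrative choices:)
docker network create \
  --driver bridge \
  --subnet 192.168.128.0/24 \
  --gateway 192.168.128.1 \
  mynetwork
# 192.168.173.215 now lies outside the bridge subnet, so the container
# sends traffic for it to the gateway and the host NATs it out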
I am currently using tinc to create a VPN between two servers. This allows me to access server B from server A through the IP address 10.0.0.2, and it creates an interface:
tun0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500
        inet 10.0.0.1  netmask 255.255.255.0  destination 10.0.0.1
        inet6 fe80::babb:cc53:dd5e:23f8  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 42  bytes 11987 (11.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 55  bytes 7297 (7.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
I would like to pass this route to my docker container on server A:
version: '3.2'
services:
  traefik:
    image: "traefik:v2.2.0"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "./acme:/acme"
      - "./traefik.toml:/traefik.toml"
      - "./rules:/etc/traefik/rules"
    networks:
      - traefik
    deploy:
      placement:
        constraints:
          - node.role == manager
networks:
  traefik:
    external: true
Currently, from inside the traefik container I can ping 10.0.0.2, but it is a completely different host.
If I remove:
networks:
  traefik:
    external: true
and add network_mode: host within the traefik service, I can route to 10.0.0.2, but then I cannot access the other containers which share the traefik network.
If I try to put them both together I get the error:
'network_mode' and 'networks' cannot be combined
In other words, how can I create the dashed-line connection from my diagram?
The diagram also depicts my problem in that container B can't be in both networks at once.
I added server A just as a more real-world example of a swarm.
A solution I came up with was to not use tinc at all and instead use autossh to effectively port forward, with a command like this run on server B (first image):
autossh -M 43585 -o "compression=no" -o "cipher=aes128-gcm@openssh.com" -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -NR 3000:localhost:3000 root@serverA
This means that I can then route from the container to server B via http://serverA:3000, for example.
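(One caveat I'd add, an assumption on my part rather than something from the answer above: sshd binds remote forwards to 127.0.0.1 on server A by default, so a container on a bridge network cannot reach the tunnel as serverA:3000. That needs GatewayPorts clientspecified (or yes) in server A's sshd_config, plus an explicit bind address in the forward, for example:)
# on server B: bind the remote forward on all of server A's interfaces
autossh -M 43585 -NR 0.0.0.0:3000:localhost:3000 root@serverA
# on server A: let a container resolve "serverA" to the Docker host (Docker 20.10+)
docker run --rm --add-host serverA:host-gateway alpine wget -qO- http://serverA:3000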
I am trying to ping from a Docker container running along with other services controlled by docker-compose.
The basic requirement is to connect to external DB servers, which is not working. To debug, I tried to ping the external box, which, as expected, doesn't respond. I can, however, ping the external box from the VM host.
/etc/hosts has the entry, as I have provided the following lines in docker-compose.yml:
extra_hosts:
  - "externalhostname:10.40.154.27"
From docker inspect, these are the network details:
"Networks": {
"echo_service_default": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"python-interpreter",
"3767f3a7ad80"
],
"NetworkID": "10ca2ec9a1dbc3659cef91014c2c64c8df17e9d720350d1bdd198a53c6c0a946",
"EndpointID": "c920443c34ff00ffefe2c669bc4b80e121c27d1b8ebc44fa9f5efb16e71561a4",
"Gateway": "172.19.0.1",
"IPAddress": "172.19.0.6",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:13:00:06",
"DriverOpts": null
}
ifconfig on the host gives:
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:b6ff:fe86:fb71  prefixlen 64  scopeid 0x20<link>
        ether 02:42:b6:86:fb:71  txqueuelen 0  (Ethernet)
        RX packets 204  bytes 90455 (88.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5080  bytes 459752 (448.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
The service entry from docker-compose.yml:
python-interpreter:
  image: image name
  hostname: python-interpreter
  volumes:
    - /scratch/share:/var/python-interpreter/data:ro
    - shared:/var/shared
  extra_hosts:
    - "externalhostname:10.40.154.27"
So the service is running on a bridge network with no -p option.
In the docker-compose.yml you shared, the python-interpreter service has an extra_hosts definition but no network definition; you need to define a network, for example:
python-interpreter:
  image: image name
  hostname: python-interpreter
  volumes:
    - /scratch/share:/var/python-interpreter/data:ro
    - shared:/var/shared
  networks:
    - backend
And define the network at the end of your docker-compose.yml:
networks:
  backend:
    driver: bridge
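Once that's in place, you can verify from inside the service that the extra host resolves and answers (assuming the image ships ping; the service name is the one from the snippet above):
$ docker-compose up -d
$ docker-compose exec python-interpreter ping -c 3 externalhostname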
It seems it is working now.
I changed to host mode:
python-interpreter:
  image: ...
  hostname: ...
  network_mode: host
  volumes:
    ...
  extra_hosts:
    - "whf00aqw.in.oracle.com:10.40.154.27"
I believe (not sure) that because it was going through the bridge, it was restricting network access to within the host. Now that I have made it host mode, it is able to share the host network and access the external box.
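If you want to double-check which mode a container actually ended up with, docker inspect shows it (the container name below is hypothetical; compose prefixes it with the project name):
$ docker inspect --format '{{.HostConfig.NetworkMode}}' <container_name>
# prints "host" for host mode, and the network name or "default" for bridge setups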