I have a multi-container application deployed on an EC2 instance via a single ECS task. When I try making an HTTP request to container-2 from container-1, I get the error "Name or service not known."
I'm unable to reproduce this locally when I run with docker compose. I'm using the bridge network mode. I've SSH'd into the EC2 instance and can see that both containers are on the bridge network. (I've unsuccessfully tried awsvpc as well and that led to a different set of issues... so I'll save that for a separate post if necessary.)
Here's a snippet of my task-definition.json:
{
...
"containerDefinitions": [
{
"name": "container-1",
"image": "container-1",
"portMappings": [
{
"hostPort": 8081,
"containerPort": 8081,
"protocol": "tcp"
}
]
},
{
"name": "container-2",
"image": "container-2",
"portMappings": [
{
"hostPort": 8080,
"containerPort": 8080,
"protocol": "tcp"
}
]
}
],
"networkMode": "bridge",
...
}
EDIT1 - Adding some of my ifconfig, let me know if I need to add more.
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:a7ff:febd:55df prefixlen 64 scopeid 0x20<link>
ether 02:42:a7:bd:55:df txqueuelen 0 (Ethernet)
RX packets 842 bytes 55315 (54.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 614 bytes 78799 (76.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ecs-bridge: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 169.254.172.1 netmask 255.255.252.0 broadcast 0.0.0.0
inet6 fe80::c5a:1bff:fed4:525f prefixlen 64 scopeid 0x20<link>
ether 00:00:00:00:00:00 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 23 bytes 1890 (1.8 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 3760 bytes 274480 (268.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3760 bytes 274480 (268.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
EDIT2 - docker inspect bridge
[
{
"Name": "bridge",
"Id": "...",
"Created": "...",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "XXX",
"Gateway": "XXX"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"somehash": {
"Name": "container-1",
"EndpointID": "XXX",
"MacAddress": "XXX",
"IPv4Address": "XXX",
"IPv6Address": ""
},
"somehash": {
"Name": "container-2",
"EndpointID": "XXX",
"MacAddress": "XXX",
"IPv4Address": "XXX",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
To allow containers in a single task to communicate with each other by name when using the EC2 launch type with bridge network mode, you need to specify the links attribute, which maps containers to internal network names. This is documented here.
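For illustration, here is how the links attribute could be added to the containerDefinitions from the snippet above (a sketch; only the links line is new), after which container-1 should be able to reach http://container-2:8080:

```json
"containerDefinitions": [
  {
    "name": "container-1",
    "image": "container-1",
    "links": ["container-2"],
    "portMappings": [
      { "hostPort": 8081, "containerPort": 8081, "protocol": "tcp" }
    ]
  },
  {
    "name": "container-2",
    "image": "container-2",
    "portMappings": [
      { "hostPort": 8080, "containerPort": 8080, "protocol": "tcp" }
    ]
  }
]
```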
Related
I run two containers with Compose. The containers can communicate with each other by IP, but not by container name: the name is not resolved.
Docker network inspect of the network where the containers are running:
docker inspect mycompose
[
{
"Name": "mycompose",
"Id": "5d6f614b1a67efa38143adf745700cac103be07f74bcb219fd547aa8ce8abd1e",
"Created": "2019-11-07T17:08:49.940162+01:00",
"Scope": "local",
"Driver": "nat",
"EnableIPv6": false,
"IPAM": {
"Driver": "windows",
"Options": null,
"Config": [
{
"Subnet": "172.22.144.0/20",
"Gateway": "172.22.144.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"876efda0f26487b4c56dd52485e6f76ecc5c97214ae99cb0760a5cad4c65ea74": {
"Name": "mycompose_brim_1",
"EndpointID": "15a3049b1f79d5da3129030574fd16dc46ced5246f15fe00b08b778a2b8ab8ef",
"MacAddress": "00:15:5d:57:23:87",
"IPv4Address": "172.22.144.113/16",
"IPv6Address": ""
},
"b8c596491ae84a1da8d597ea6ab6edf5872405856520e46d8f35581f48314b5f": {
"Name": "mycompose_brimdb_1",
"EndpointID": "8bbf233cfeb57729570f581dd34f6677a6a6655dd64c2090dacc26b91604eb7c",
"MacAddress": "00:15:5d:57:25:be",
"IPv4Address": "172.25.113.185/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.windowsshim.hnsid": "48240B53-F49A-43B9-9A20-113C047D65A1"
},
"Labels": {
"com.docker.compose.network": "v2",
"com.docker.compose.project": "mycompose",
"com.docker.compose.version": "1.24.1"
}
}
]
Trying to ping one container from another by container name:
PS C:\inetpub\wwwroot> ping mycompose_brimdb_1
Ping request could not find host mycompose_brimdb_1. Please check the name and try again
Trying to ping by service name:
PS C:\inetpub\wwwroot> ping brimdb
Ping request could not find host brimdb. Please check the name and try again.
Trying to ping the same container by IP:
PS C:\inetpub\wwwroot> ping 172.25.113.185
Pinging 172.25.113.185 with 32 bytes of data:
Reply from 172.25.113.185: bytes=32 time=1ms TTL=128
Reply from 172.25.113.185: bytes=32 time<1ms TTL=128
Reply from 172.25.113.185: bytes=32 time=5ms TTL=128
Reply from 172.25.113.185: bytes=32 time<1ms TTL=128
Ping statistics for 172.25.113.185:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 5ms, Average = 1ms
compose file:
version: "3.7"
services:
  brim:
    image: brim:latest
    ports:
      - target: 50893
        published: 50893
        protocol: tcp
  brimdb:
    image: brimdb:latest
    ports:
      - target: 1555
        published: 1555
        protocol: tcp
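As a quick way to separate a DNS failure from an ICMP/firewall issue, one can query the service name directly from inside a container (a diagnostic sketch, not from the original post; the container name is taken from the inspect output above, and nslookup is available in Windows containers):

```shell
# Run from the host; asks the embedded Docker DNS inside the container
# to resolve the Compose service name.
docker exec mycompose_brim_1 nslookup brimdb
```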
I've got a stack with some containers. One of them can't be reached by the others via its hostname, and it seems to be an IP address problem.
docker network inspect mystack
"Name": "mystack_default",
"Id": "k9tanhwcyv42473ehsehqhqp7",
"Created": "2019-08-22T16:10:45.097992076+02:00",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.2.0/24",
"Gateway": "10.0.2.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"5d8e4b8cba8889036a869a280d5996f104f250677b8c962dc45ba72441e1840d": {
"Name": "mystack_api.1.t4oax9f5uyw0237h2ysgwrzxq",
"EndpointID": "34037c244f828e035c54b5ef3f21f020cf046218b39ffc2835dd4156f3d2b688",
"MacAddress": "02:42:0a:00:02:23",
"IPv4Address": "10.0.2.35/24",
"IPv6Address": ""
},
"49f6a8444475fdcea2f96bdb7fbc62b908b5cd83175c3068a675761e64500e0e": {
"Name": "mystack_webview.1.biby87oba9z3awkb3n4439yho",
"EndpointID": "d9c0551a0213e38651c352970d5970b3f80b067676b3fb959845e139b7261c1a",
"MacAddress": "02:42:0a:00:02:20",
"IPv4Address": "10.0.2.32/24",
"IPv6Address": ""
},
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4099"
},
"Labels": {
"com.docker.stack.namespace": "mystack"
},
"Peers": [
{
"Name": "1b4f79e8e881",
"IP": "192.168.1.67"
}
]
The api service can ping the webview service by hostname, but the name resolves to the wrong IP for my webview service:
# ping webview
PING webview (10.0.2.17) 56(84) bytes of data. // WRONG IP! (it should be 10.0.2.32)
64 bytes from 10.0.2.17 (10.0.2.17): icmp_seq=1 ttl=64 time=0.126 ms
64 bytes from 10.0.2.17 (10.0.2.17): icmp_seq=2 ttl=64 time=0.099 ms
The webview service can't ping the api service by hostname (bad address error), but it works with the service's IP address:
/app # ping 10.0.2.35
PING 10.0.2.35 (10.0.2.35): 56 data bytes
64 bytes from 10.0.2.35: seq=0 ttl=64 time=0.331 ms
64 bytes from 10.0.2.35: seq=1 ttl=64 time=0.140 ms
^C
--- 10.0.2.35 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.140/0.235/0.331 ms
/app # ping api
ping: bad address 'api'
There is a problem with the Docker network, but I don't know how to solve it. I've already uninstalled and reinstalled Docker and removed the Docker network interface entries... Any ideas? Thank you very much for your help!
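One thing worth knowing when debugging this: by default a swarm service name resolves to the service's virtual IP (VIP), not to a task's container IP, so an address that differs from the one shown in docker network inspect is not by itself wrong. The special tasks.&lt;service&gt; name resolves to the individual task IPs. A diagnostic sketch (commands run inside one of the containers; not from the original post):

```shell
# Bare service name -> the service VIP.
nslookup webview
# tasks.<service> -> the IPs of the individual task containers.
nslookup tasks.webview
```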
I am running Docker on a physical Fedora machine. I started a Prometheus container with host networking, so I'd expect to be able to access the Prometheus web UI from other hosts on the same local network, but that fails.
My machine's IP is 172.31.209.112; the host from which I want to access Prometheus is 172.31.1.195, and from it I can ping 172.31.209.112. I can also confirm that no firewall is running on either side.
To access the LAN I need to install a piece of security software that only has a Windows version, so I installed a Windows VM with its network set to NAT to satisfy the security software, but I don't think that has anything to do with my Docker access problem.
I can access Prometheus at localhost:9090 or 172.31.209.112:9090 locally, but I just can't access it from any other host on the LAN. Please help.
Here is my Docker container config:
{
"Id": "88b4c38e6a659754e861976b6b8b11d1dff495db1ef3d572169065a2e0acf4f6",
"Created": "2019-07-03T10:06:59.61293669Z",
"Path": "/bin/prometheus",
"Args": [
"--config.file=/etc/prometheus/prometheus.yml",
"--storage.tsdb.path=/prometheus",
"--web.console.libraries=/usr/share/prometheus/console_libraries",
"--web.console.templates=/usr/share/prometheus/consoles"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 5687,
"ExitCode": 0,
"Error": "",
"StartedAt": "2019-07-03T10:06:59.836329996Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:5d62a6125e7e0121151c24f6ec132bfae02cc37a2b57a666fd0569bed66d498f",
"ResolvConfPath": "/var/lib/docker/containers/88b4c38e6a659754e861976b6b8b11d1dff495db1ef3d572169065a2e0acf4f6/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/88b4c38e6a659754e861976b6b8b11d1dff495db1ef3d572169065a2e0acf4f6/hostname",
"HostsPath": "/var/lib/docker/containers/88b4c38e6a659754e861976b6b8b11d1dff495db1ef3d572169065a2e0acf4f6/hosts",
"LogPath": "/var/lib/docker/containers/88b4c38e6a659754e861976b6b8b11d1dff495db1ef3d572169065a2e0acf4f6/88b4c38e6a659754e861976b6b8b11d1dff495db1ef3d572169065a2e0acf4f6-json.log",
"Name": "/elastic_kapitsa",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/home/ggfan/4-tmp/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "host",
"PortBindings": {},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "shareable",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": 0,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/d1e89e203e0a0791e370911f07f112a56603be51af65c29fc679b0c78350d4bf-init/diff:/var/lib/docker/overlay2/64e3d0a748213274b8210bde9f849e09e40aa59214cec724f0ef3167ed0d8030/diff:/var/lib/docker/overlay2/197e352b2464399aef2da8a75f31cb78179c3ad7b7e03b517176694fca19e7bc/diff:/var/lib/docker/overlay2/2a601b39acac1f8054c28af2d68004611ee17417c82d9c12842fc1b4725d18f5/diff:/var/lib/docker/overlay2/3590491c1df9eb9d1f4272abad88a6ad44f7f7e169129b8a6bdbba8a02fca09f/diff:/var/lib/docker/overlay2/0b1a049f0c204896187f0a3050a43eeb7d226fd0bfa810149f689039215d0068/diff:/var/lib/docker/overlay2/8c3523f1a5d77616be86ca76850c2cb682728c9d349032c0809781e3a9f24839/diff:/var/lib/docker/overlay2/b95196ec6ef2ed89345e6ecda9c476a30266a14097ca3931169159e01d5610b8/diff:/var/lib/docker/overlay2/56f16ca07e36898774523cb9d17bb5fc4d9c6526579bc3d764d257c41970e5bb/diff:/var/lib/docker/overlay2/749405b668264446ef9ec29a24089b5ce7dc7b9363033afab5dfa765ad1980e6/diff",
"MergedDir": "/var/lib/docker/overlay2/d1e89e203e0a0791e370911f07f112a56603be51af65c29fc679b0c78350d4bf/merged",
"UpperDir": "/var/lib/docker/overlay2/d1e89e203e0a0791e370911f07f112a56603be51af65c29fc679b0c78350d4bf/diff",
"WorkDir": "/var/lib/docker/overlay2/d1e89e203e0a0791e370911f07f112a56603be51af65c29fc679b0c78350d4bf/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/home/ggfan/4-tmp/prometheus/prometheus.yml",
"Destination": "/etc/prometheus/prometheus.yml",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "833a1e3a523e0e3061f937ef9868413342ba9277c079b8f3bab5d363a884aa44",
"Source": "/var/lib/docker/volumes/833a1e3a523e0e3061f937ef9868413342ba9277c079b8f3bab5d363a884aa44/_data",
"Destination": "/prometheus",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "localhost.localdomain",
"Domainname": "",
"User": "nobody",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"9090/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": [
"--config.file=/etc/prometheus/prometheus.yml",
"--storage.tsdb.path=/prometheus",
"--web.console.libraries=/usr/share/prometheus/console_libraries",
"--web.console.templates=/usr/share/prometheus/consoles"
],
"ArgsEscaped": true,
"Image": "prom/prometheus",
"Volumes": {
"/prometheus": {}
},
"WorkingDir": "/prometheus",
"Entrypoint": [
"/bin/prometheus"
],
"OnBuild": null,
"Labels": {
"maintainer": "The Prometheus Authors <prometheus-developers@googlegroups.com>"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "5029bdc13e1f15aebb36e2d5a84c6d34f2d078360705a577946f92268b59a37e",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "/var/run/docker/netns/default",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"host": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "d58749e2d0012b4e59f12097a590b4814258fad4a99252a21df1c2863f5ca03b",
"EndpointID": "7238756953fc7ea91d9364c17a7730658f2057b583508bd1ea8ebcbffd1f6ab5",
"Gateway": "",
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "",
"DriverOpts": null
}
}
}
}
my network:
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:10ff:fe57:b91 prefixlen 64 scopeid 0x20<link>
ether 02:42:10:57:0b:91 txqueuelen 0 (Ethernet)
RX packets 33469 bytes 5113347 (4.8 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 40202 bytes 4779129 (4.5 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp0s31f6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.31.209.112 netmask 255.255.255.0 broadcast 172.31.209.255
inet6 fe80::7ab9:b4e3:e089:36e3 prefixlen 64 scopeid 0x20<link>
ether 18:66:da:45:1f:63 txqueuelen 1000 (Ethernet)
RX packets 2448716 bytes 386681103 (368.7 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 761559 bytes 192020136 (183.1 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 16 memory 0xf7180000-f71a0000
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 256361 bytes 332219233 (316.8 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 256361 bytes 332219233 (316.8 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth19c1c5e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::e80f:ccff:fefb:f2a1 prefixlen 64 scopeid 0x20<link>
ether ea:0f:cc:fb:f2:a1 txqueuelen 0 (Ethernet)
RX packets 83 bytes 285072 (278.3 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 5896 bytes 1195103 (1.1 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth4586d02: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::24c6:88ff:fe1b:41ea prefixlen 64 scopeid 0x20<link>
ether 26:c6:88:1b:41:ea txqueuelen 0 (Ethernet)
RX packets 39 bytes 2718 (2.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6957 bytes 1410747 (1.3 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
ether 52:54:00:2c:67:b9 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
virbr0-nic: flags=4098<BROADCAST,MULTICAST> mtu 1500
ether 52:54:00:2c:67:b9 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vmnet1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 169.254.131.1 netmask 255.255.0.0 broadcast 169.254.255.255
inet6 fe80::eb22:59d6:882e:9200 prefixlen 64 scopeid 0x20<link>
ether 00:50:56:c0:00:01 txqueuelen 1000 (Ethernet)
RX packets 14178 bytes 0 (0.0 B)
RX errors 0 dropped 9 overruns 0 frame 0
TX packets 6603 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
and my routes:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default _gateway 0.0.0.0 UG 100 0 0 enp0s31f6
link-local 0.0.0.0 255.255.0.0 U 101 0 0 vmnet1
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.31.0.0 _gateway 255.255.0.0 UG 100 0 0 enp0s31f6
172.31.209.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s31f6
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0
224.0.0.0 0.0.0.0 240.0.0.0 U 101 0 0 vmnet1
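Given that the container uses host networking, Prometheus should be listening directly on the host's interfaces, so the usual reachability checks apply (a diagnostic sketch, not from the original post; 9090 is the Prometheus default port):

```shell
# On the Fedora host: confirm the listener is bound to 0.0.0.0, not 127.0.0.1.
sudo ss -tlnp | grep 9090
# From the remote host (172.31.1.195): test TCP reachability explicitly,
# which distinguishes a connection refusal from a routing problem.
curl -v http://172.31.209.112:9090/
```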
I want docker0 and all my containers to have the same gateway address and to be in the same IP range as my local machine. I started by defining a fixed-cidr in the daemon file /etc/docker/daemon.json:
{
"bip": "10.80.44.248/24",
"fixed-cidr": "10.80.44.250/25",
"mtu": 1500,
"default-gateway": "10.80.44.254",
"dns": ["10.80.41.14"]
}
Judging by the interface output below, it seems to be working, although docker0 appears never to have received any traffic:
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 10.80.44.248 netmask 255.255.255.0 broadcast 10.80.44.255
ether 02:42:9c:b9:e1:63 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens192: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.80.44.39 netmask 255.255.255.0 broadcast 10.80.44.255
inet6 fe80::250:56ff:feb1:79e4 prefixlen 64 scopeid 0x20<link>
ether 00:50:56:b1:79:e4 txqueuelen 1000 (Ethernet)
RX packets 211061 bytes 30426474 (29.0 MiB)
RX errors 0 dropped 33861 overruns 0 frame 0
TX packets 3032 bytes 260143 (254.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
The local machine and docker0 are in the same IP range with the same gateway. Good.
But when I started the containers and inspected the bridge settings, everything was different. This is the output of
docker network inspect bridge
[
{
"Name": "bridge",
"Id": "b326a37a589245449e1268bbb9ee65262eb7986574c0e972c56d350aa82d7238",
"Created": "2018-04-04T03:25:52.00544539+02:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.80.44.248/24",
"IPRange": "10.80.44.128/25",
"Gateway": "10.80.44.248",
"AuxiliaryAddresses": {
"DefaultGatewayIPv4": "10.80.44.254"
}
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
I don't understand why the IPAM config now has an IPv4 address as an auxiliary address:
"AuxiliaryAddresses": {
"DefaultGatewayIPv4": "10.80.44.254"
}
I realised that the Compose network is not created from the subnet configured in the daemon; Docker created two different bridge networks with different IP ranges, and the second one still uses Docker's default.
docker network ls
NETWORK ID NAME DRIVER SCOPE
b326a37a5892 bridge bridge local
6ce11066cdea dockergitlab_default bridge local
d5a36c04b809 host host local
15f66b88ee67 none null local
docker network inspect dockergitlab_default
[
{
"Name": "dockergitlab_default",
"Id": "6ce11066cdeabf3cfe65b2dff22046bd1e9c18d2588f47b9cd3c52ea24f7a636",
"Created": "2018-03-14T08:56:23.351051727+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"60f769c91cff1de47794a7c8b587b778488883da094ae32cfde5196ee0f528f1": {
"Name": "gitlab-runner",
"EndpointID": "5122fe862537fb8434a484b4797153274b945e20bc3c7223efc6fd0bd55eae14",
"MacAddress": "02:42:ac:11:00:04",
"IPv4Address": "172.17.0.4/16",
"IPv6Address": ""
},
"9c46e1fde6390142bddf67270cfeda7b3e68b1a6e68cabc334046db687240a8d": {
"Name": "dockergitlab_postgresql_1",
"EndpointID": "8488b32cc34a2c92308528de74b5eddcecac12a402ee6e67c1ef0f2750b72721",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
},
"eaf29f5f405cbf9bdd918efad26ceae1a8c3f58f4bef0aa8fd86b4631bcfdf43": {
"Name": "dockergitlab_gitlab_1",
"EndpointID": "d7f78ee9bd51dd13826d7834470d03a9084fc7ab8c6567c0181acecc221628c6",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
},
"f460687ec00eff214fa08adfe9a0af5b85c392ceb470c4ed630ef7ecb0bfcba1": {
"Name": "dockergitlab_redis_1",
"EndpointID": "8b18906f1c79a5faaadd32afdef20473f9b635e9a1cd2c7108dd98df48eaed86",
"MacAddress": "02:42:ac:11:00:05",
"IPv4Address": "172.17.0.5/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "dockergitlab"
}
}
]
I have no idea why this Docker bridge network is still created with the old default IP range.
LOCAL SYSTEM Details
I can run apt update on the local machine, but when I log into the gitlab-runner container I can't run apt update.
Linux 4.9.0-6-amd64 #1 SMP Debian 4.9.82-1+deb9u3 (2018-03-02) x86_64
Docker version 17.12.0-ce, build c97c6d6
docker-compose version 1.18.0, build 8dd22a9
Is there a way I can override the bridge settings? From what I have read, when I define the CIDR and gateway in the daemon.json file, everything should be taken from there for the creation of the bridge network and all other container networks.
Thanks in advance for your help.
First of all, you've correctly configured the docker0 bridge: starting containers with a plain docker run command should connect them to the bridge and give them IPs in 10.80.44.250/25.
From what you've pasted I guess you're using docker-compose to start your containers.
docker-compose will create a myproject_default network per docker-compose.yml if you don't specify anything.
As of this writing you cannot choose the pool from which these subnets are allocated; by default it is 172.[17-31].0.0/16. There is currently an active pull request to allow overriding this behaviour: https://github.com/moby/moby/pull/36396.
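For reference, what that pull request proposes is a default-address-pools key in /etc/docker/daemon.json; once the feature is available, a configuration along these lines (an illustrative sketch with a made-up range, not something current releases are guaranteed to accept) would control the pool:

```json
{
  "default-address-pools": [
    { "base": "10.80.0.0/16", "size": 24 }
  ]
}
```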
If you want to manually specify the IP range in your docker-compose.yml you can write this :
networks:
  default:
    ipam:
      config:
        - subnet: 10.80.44.250/25
Edit: this is only compatible with Compose file syntax >= 3.0.
I have a Docker swarm with a single node. I've deployed an image registry as a service:
docker service create \
--name image-registry \
--hostname image-registry.localdomain.local \
--secret image-registry.crt \
--secret image-registry.key \
--constraint 'node.labels.registry==true' \
--mount type=bind,src=/var/image-registry/,dst=/var/lib/registry \
-e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/run/secrets/image-registry.crt \
-e REGISTRY_HTTP_TLS_KEY=/run/secrets/image-registry.key \
--publish published=443,target=443 \
--replicas 1 \
registry:2
The service seems healthy:
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
ywt51zvik09s image-registry replicated 1/1 registry:2 *:443->443/tcp
I inspect the service to find the virtual IP:
$ docker service inspect image-registry
[
{
"ID": "ywt51zvik09szz2jl9xgxbj8i",
"Version": {
"Index": 54378
},
"CreatedAt": "2017-11-29T02:01:04.063664587Z",
"UpdatedAt": "2017-11-29T02:01:04.065183181Z",
"Spec": {
"Name": "image-registry",
"Labels": {},
"TaskTemplate": {
"ContainerSpec": {
"Image": "registry:2@sha256:d837de65fd9bdb81d74055f1dc9cc9154ad5d8d5328f42f57f273000c402c76d",
"Hostname": "image-registry.localdomain.local",
"Env": [
"REGISTRY_HTTP_ADDR=0.0.0.0:443",
"REGISTRY_HTTP_TLS_CERTIFICATE=/run/secrets/image-registry.crt",
"REGISTRY_HTTP_TLS_KEY=/run/secrets/image-registry.key"
],
"Mounts": [
{
"Type": "bind",
"Source": "/var/image-registry/",
"Target": "/var/lib/registry"
}
],
"StopGracePeriod": 10000000000,
"DNSConfig": {},
"Secrets": [
{
"File": {
"Name": "image-registry.crt",
"UID": "0",
"GID": "0",
"Mode": 292
},
"SecretID": "t88ee92s2sax4ewihbbrmwwyw",
"SecretName": "image-registry.crt"
},
{
"File": {
"Name": "image-registry.key",
"UID": "0",
"GID": "0",
"Mode": 292
},
"SecretID": "srsaybf31lqpl942rfmlndm4h",
"SecretName": "image-registry.key"
}
]
},
"Resources": {
"Limits": {},
"Reservations": {}
},
"RestartPolicy": {
"Condition": "any",
"Delay": 5000000000,
"MaxAttempts": 0
},
"Placement": {
"Constraints": [
"node.labels.registry==true"
],
"Platforms": [
{
"Architecture": "amd64",
"OS": "linux"
}
]
},
"ForceUpdate": 0,
"Runtime": "container"
},
"Mode": {
"Replicated": {
"Replicas": 1
}
},
"UpdateConfig": {
"Parallelism": 1,
"FailureAction": "pause",
"Monitor": 5000000000,
"MaxFailureRatio": 0,
"Order": "stop-first"
},
"RollbackConfig": {
"Parallelism": 1,
"FailureAction": "pause",
"Monitor": 5000000000,
"MaxFailureRatio": 0,
"Order": "stop-first"
},
"EndpointSpec": {
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 443,
"PublishedPort": 443,
"PublishMode": "ingress"
}
]
}
},
"Endpoint": {
"Spec": {
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 443,
"PublishedPort": 443,
"PublishMode": "ingress"
}
]
},
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 443,
"PublishedPort": 443,
"PublishMode": "ingress"
}
],
"VirtualIPs": [
{
"NetworkID": "d5pvc254jq5e1n0e16v8ecp1j",
"Addr": "10.255.0.3/16"
}
]
}
}
]
But when I try to ping the virtual IP from the host, I get:
ping 10.255.0.3
PING 10.255.0.3 (10.255.0.3) 56(84) bytes of data.
From 65.12.13.1 icmp_seq=1 Destination Host Unreachable
From 65.12.13.1 icmp_seq=2 Destination Host Unreachable
From 65.12.13.1 icmp_seq=3 Destination Host Unreachable
From 65.12.13.1 icmp_seq=4 Destination Host Unreachable
When I do ifconfig I don't see any of these networks:
$ ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:47:e7:22:43
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
docker_gwbridge Link encap:Ethernet HWaddr 02:42:ac:b9:0c:1c
inet addr:172.18.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:feb9:c1c/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:91 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:9348 (9.3 KB)
enp3s0 Link encap:Ethernet HWaddr 1c:1b:0d:7e:ad:b2
inet addr:192.168.1.148 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fdfb:4eb5:df66:0:e0c0:4e3:83d2:63de/64 Scope:Global
inet6 addr: fe80::66e0:994a:2ae7:8180/64 Scope:Link
inet6 addr: fdfb:4eb5:df66:0:986b:be9b:687a:48d0/64 Scope:Global
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:993615 errors:0 dropped:0 overruns:0 frame:0
TX packets:617970 errors:6 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1333226168 (1.3 GB) TX bytes:55076679 (55.0 MB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:165431 errors:0 dropped:0 overruns:0 frame:0
TX packets:165431 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:25958351 (25.9 MB) TX bytes:25958351 (25.9 MB)
veth4bd29fc Link encap:Ethernet HWaddr c2:ef:1c:ba:6e:f3
inet6 addr: fe80::c0ef:1cff:feba:6ef3/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:82 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:8059 (8.0 KB)
vethb2889ca Link encap:Ethernet HWaddr c2:9d:1a:df:8f:a8
inet6 addr: fe80::c09d:1aff:fedf:8fa8/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:150 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:15411 (15.4 KB)
Any idea what is going on here?
You're inspecting the service rather than the container; a classic beginner mistake :-) Docker has a lot of "inspect" commands that apply to different object types.
You want to inspect the container, which is either:
docker inspect [container_id] or
docker container inspect [container_id]
Either works; however, the second form is the newer style: as the number of subcommands grew, Docker began splitting them into command groups.
Note: you must use the container ID, not the service ID! Find it via docker ps.
As an example:
➜ ~ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
28409910f4b2 nginx "nginx -g 'daemon ..." 47 hours ago Up 47 hours 80/tcp lucid_feynman
➜ ~ docker inspect --format '{{.NetworkSettings.IPAddress}}' 28409910f4b2
172.17.0.2
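If you only have the service name, a sketch of getting from service to container IP in one go (the --filter and --format values are standard docker ps/inspect options; the service name is from the question):

```shell
# Find the task container backing the service...
docker ps --filter "name=image-registry" --format '{{.ID}}'
# ...then inspect its networks. A container attached to non-default
# networks reports its IPs under .NetworkSettings.Networks rather than
# the top-level .IPAddress field.
docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <container-id>
```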