Docker swarm restarts all containers on a host periodically [closed]

I have the following dmesg log:
[Fri Jan 17 07:22:25 2020] [UFW BLOCK] IN=enp6s0 OUT= MAC=00:25:90:66:ab:2c:cc:4e:24:f9:de:60:08:00 SRC=185.176.27.162 DST=91.237.249.65 LEN=40 TOS=0x00 PREC=0x00 TTL=246 ID=34473 PROTO=TCP SPT=42928 DPT=4443 WINDOW=1024 RES=0x00 SYN URGP=0
[Fri Jan 17 07:22:44 2020] veth13: renamed from vethdc65e40
[Fri Jan 17 07:22:44 2020] br0: port 3(veth13) entered blocking state
[Fri Jan 17 07:22:44 2020] br0: port 3(veth13) entered disabled state
[Fri Jan 17 07:22:44 2020] device veth13 entered promiscuous mode
[Fri Jan 17 07:22:44 2020] veth14: renamed from vethc274e51
[Fri Jan 17 07:22:44 2020] br0: port 4(veth14) entered blocking state
[Fri Jan 17 07:22:44 2020] br0: port 4(veth14) entered disabled state
[Fri Jan 17 07:22:44 2020] device veth14 entered promiscuous mode
[Fri Jan 17 07:22:44 2020] br0: port 4(veth14) entered blocking state
[Fri Jan 17 07:22:44 2020] br0: port 4(veth14) entered forwarding state
[Fri Jan 17 07:22:44 2020] veth15: renamed from vethade6b91
[Fri Jan 17 07:22:44 2020] br0: port 5(veth15) entered blocking state
[Fri Jan 17 07:22:44 2020] br0: port 5(veth15) entered disabled state
[Fri Jan 17 07:22:44 2020] device veth15 entered promiscuous mode
[Fri Jan 17 07:22:44 2020] br0: port 5(veth15) entered blocking state
[Fri Jan 17 07:22:44 2020] br0: port 5(veth15) entered forwarding state
[Fri Jan 17 07:22:44 2020] docker_gwbridge: port 5(veth39f34fb) entered blocking state
[Fri Jan 17 07:22:44 2020] docker_gwbridge: port 5(veth39f34fb) entered disabled state
[Fri Jan 17 07:22:44 2020] device veth39f34fb entered promiscuous mode
[Fri Jan 17 07:22:44 2020] IPv6: ADDRCONF(NETDEV_UP): veth39f34fb: link is not ready
[Fri Jan 17 07:22:44 2020] docker_gwbridge: port 5(veth39f34fb) entered blocking state
[Fri Jan 17 07:22:44 2020] docker_gwbridge: port 5(veth39f34fb) entered forwarding state
[Fri Jan 17 07:22:44 2020] docker_gwbridge: port 9(veth2dd14ef) entered blocking state
[Fri Jan 17 07:22:44 2020] docker_gwbridge: port 9(veth2dd14ef) entered disabled state
[Fri Jan 17 07:22:44 2020] device veth2dd14ef entered promiscuous mode
[Fri Jan 17 07:22:44 2020] IPv6: ADDRCONF(NETDEV_UP): veth2dd14ef: link is not ready
[Fri Jan 17 07:22:44 2020] docker_gwbridge: port 9(veth2dd14ef) entered blocking state
[Fri Jan 17 07:22:44 2020] docker_gwbridge: port 9(veth2dd14ef) entered forwarding state
[Fri Jan 17 07:22:44 2020] docker_gwbridge: port 10(veth253201d) entered blocking state
[Fri Jan 17 07:22:44 2020] docker_gwbridge: port 10(veth253201d) entered disabled state
[Fri Jan 17 07:22:44 2020] device veth253201d entered promiscuous mode
[Fri Jan 17 07:22:44 2020] IPv6: ADDRCONF(NETDEV_UP): veth253201d: link is not ready
[Fri Jan 17 07:22:44 2020] docker_gwbridge: port 10(veth253201d) entered blocking state
[Fri Jan 17 07:22:44 2020] docker_gwbridge: port 10(veth253201d) entered forwarding state
[Fri Jan 17 07:22:45 2020] br0: port 4(veth14) entered disabled state
[Fri Jan 17 07:22:45 2020] br0: port 5(veth15) entered disabled state
[Fri Jan 17 07:22:45 2020] docker_gwbridge: port 5(veth39f34fb) entered disabled state
[Fri Jan 17 07:22:45 2020] docker_gwbridge: port 9(veth2dd14ef) entered disabled state
[Fri Jan 17 07:22:45 2020] docker_gwbridge: port 10(veth253201d) entered disabled state
[Fri Jan 17 07:22:45 2020] eth0: renamed from veth9c853c4
[Fri Jan 17 07:22:45 2020] br0: port 3(veth13) entered blocking state
[Fri Jan 17 07:22:45 2020] br0: port 3(veth13) entered forwarding state
[Fri Jan 17 07:22:45 2020] eth1: renamed from veth38928ee
[Fri Jan 17 07:22:46 2020] IPv6: ADDRCONF(NETDEV_CHANGE): veth39f34fb: link becomes ready
[Fri Jan 17 07:22:46 2020] docker_gwbridge: port 5(veth39f34fb) entered blocking state
[Fri Jan 17 07:22:46 2020] docker_gwbridge: port 5(veth39f34fb) entered forwarding state
[Fri Jan 17 07:22:46 2020] eth0: renamed from veth0a34354
[Fri Jan 17 07:22:46 2020] br0: port 4(veth14) entered blocking state
[Fri Jan 17 07:22:46 2020] br0: port 4(veth14) entered forwarding state
[Fri Jan 17 07:22:46 2020] eth1: renamed from veth3673041
[Fri Jan 17 07:22:46 2020] IPv6: ADDRCONF(NETDEV_CHANGE): veth2dd14ef: link becomes ready
[Fri Jan 17 07:22:46 2020] docker_gwbridge: port 9(veth2dd14ef) entered blocking state
[Fri Jan 17 07:22:46 2020] docker_gwbridge: port 9(veth2dd14ef) entered forwarding state
[Fri Jan 17 07:22:47 2020] br0: port 7(veth11) entered disabled state
[Fri Jan 17 07:22:47 2020] veth1545e82: renamed from eth0
[Fri Jan 17 07:22:47 2020] br0: port 8(veth12) entered disabled state
[Fri Jan 17 07:22:47 2020] vethb08fde7: renamed from eth0
[Fri Jan 17 07:22:47 2020] docker_gwbridge: port 6(vethccff74f) entered disabled state
[Fri Jan 17 07:22:47 2020] veth590e84b: renamed from eth1
[Fri Jan 17 07:22:47 2020] vethc6b5132: renamed from eth1
[Fri Jan 17 07:22:47 2020] docker_gwbridge: port 11(veth36b424b) entered disabled state
[Fri Jan 17 07:22:47 2020] docker_gwbridge: port 6(vethccff74f) entered disabled state
[Fri Jan 17 07:22:47 2020] device vethccff74f left promiscuous mode
[Fri Jan 17 07:22:47 2020] docker_gwbridge: port 6(vethccff74f) entered disabled state
[Fri Jan 17 07:22:47 2020] eth0: renamed from vetha1f2897
[Fri Jan 17 07:22:47 2020] docker_gwbridge: port 11(veth36b424b) entered disabled state
[Fri Jan 17 07:22:47 2020] device veth36b424b left promiscuous mode
[Fri Jan 17 07:22:47 2020] docker_gwbridge: port 11(veth36b424b) entered disabled state
[Fri Jan 17 07:22:47 2020] br0: port 5(veth15) entered blocking state
[Fri Jan 17 07:22:47 2020] br0: port 5(veth15) entered forwarding state
[Fri Jan 17 07:22:47 2020] br0: port 7(veth11) entered disabled state
[Fri Jan 17 07:22:47 2020] device veth11 left promiscuous mode
[Fri Jan 17 07:22:47 2020] br0: port 7(veth11) entered disabled state
[Fri Jan 17 07:22:48 2020] br0: port 8(veth12) entered disabled state
[Fri Jan 17 07:22:48 2020] device veth12 left promiscuous mode
[Fri Jan 17 07:22:48 2020] br0: port 8(veth12) entered disabled state
[Fri Jan 17 07:22:48 2020] eth1: renamed from veth56c6992
[Fri Jan 17 07:22:48 2020] IPv6: ADDRCONF(NETDEV_CHANGE): veth253201d: link becomes ready
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 10(veth253201d) entered blocking state
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 10(veth253201d) entered forwarding state
[Fri Jan 17 07:22:48 2020] br0: port 5(veth462) entered disabled state
[Fri Jan 17 07:22:48 2020] vethd306c3b: renamed from eth0
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 8(veth7911ac5) entered disabled state
[Fri Jan 17 07:22:48 2020] veth41eb27e: renamed from eth1
[Fri Jan 17 07:22:48 2020] veth467: renamed from veth780af70
[Fri Jan 17 07:22:48 2020] br0: port 4(veth467) entered blocking state
[Fri Jan 17 07:22:48 2020] br0: port 4(veth467) entered disabled state
[Fri Jan 17 07:22:48 2020] device veth467 entered promiscuous mode
[Fri Jan 17 07:22:48 2020] veth22: renamed from veth23b3177
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 8(veth7911ac5) entered disabled state
[Fri Jan 17 07:22:48 2020] device veth7911ac5 left promiscuous mode
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 8(veth7911ac5) entered disabled state
[Fri Jan 17 07:22:48 2020] br0: port 5(veth22) entered blocking state
[Fri Jan 17 07:22:48 2020] br0: port 5(veth22) entered disabled state
[Fri Jan 17 07:22:48 2020] device veth22 entered promiscuous mode
[Fri Jan 17 07:22:48 2020] veth466: renamed from vethc0be03e
[Fri Jan 17 07:22:48 2020] br0: port 6(veth466) entered blocking state
[Fri Jan 17 07:22:48 2020] br0: port 6(veth466) entered disabled state
[Fri Jan 17 07:22:48 2020] device veth466 entered promiscuous mode
[Fri Jan 17 07:22:48 2020] veth21: renamed from vethd3d6c20
[Fri Jan 17 07:22:48 2020] br0: port 6(veth21) entered blocking state
[Fri Jan 17 07:22:48 2020] br0: port 6(veth21) entered disabled state
[Fri Jan 17 07:22:48 2020] device veth21 entered promiscuous mode
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 6(veth2ec5aab) entered blocking state
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 6(veth2ec5aab) entered disabled state
[Fri Jan 17 07:22:48 2020] device veth2ec5aab entered promiscuous mode
[Fri Jan 17 07:22:48 2020] IPv6: ADDRCONF(NETDEV_UP): veth2ec5aab: link is not ready
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 6(veth2ec5aab) entered blocking state
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 6(veth2ec5aab) entered forwarding state
[Fri Jan 17 07:22:48 2020] br0: port 5(veth462) entered disabled state
[Fri Jan 17 07:22:48 2020] device veth462 left promiscuous mode
[Fri Jan 17 07:22:48 2020] br0: port 5(veth462) entered disabled state
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 6(veth2ec5aab) entered disabled state
[Fri Jan 17 07:22:48 2020] br0: port 6(veth10) entered disabled state
[Fri Jan 17 07:22:48 2020] vethcc03a84: renamed from eth0
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 8(veth69d8ae6) entered blocking state
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 8(veth69d8ae6) entered disabled state
[Fri Jan 17 07:22:48 2020] device veth69d8ae6 entered promiscuous mode
[Fri Jan 17 07:22:48 2020] IPv6: ADDRCONF(NETDEV_UP): veth69d8ae6: link is not ready
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 8(veth69d8ae6) entered blocking state
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 8(veth69d8ae6) entered forwarding state
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 11(veth5297c44) entered blocking state
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 11(veth5297c44) entered disabled state
[Fri Jan 17 07:22:48 2020] device veth5297c44 entered promiscuous mode
[Fri Jan 17 07:22:48 2020] IPv6: ADDRCONF(NETDEV_UP): veth5297c44: link is not ready
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 11(veth5297c44) entered blocking state
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 11(veth5297c44) entered forwarding state
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 8(veth69d8ae6) entered disabled state
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 11(veth5297c44) entered disabled state
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 3(veth122e42b) entered disabled state
[Fri Jan 17 07:22:48 2020] vethe230f27: renamed from eth1
[Fri Jan 17 07:22:49 2020] docker_gwbridge: port 12(vetha0b790f) entered blocking state
[Fri Jan 17 07:22:49 2020] docker_gwbridge: port 12(vetha0b790f) entered disabled state
[Fri Jan 17 07:22:49 2020] device vetha0b790f entered promiscuous mode
[Fri Jan 17 07:22:49 2020] IPv6: ADDRCONF(NETDEV_UP): vetha0b790f: link is not ready
[Fri Jan 17 07:22:49 2020] docker_gwbridge: port 12(vetha0b790f) entered blocking state
[Fri Jan 17 07:22:49 2020] docker_gwbridge: port 12(vetha0b790f) entered forwarding state
[Fri Jan 17 07:22:49 2020] docker_gwbridge: port 3(veth122e42b) entered disabled state
[Fri Jan 17 07:22:49 2020] device veth122e42b left promiscuous mode
[Fri Jan 17 07:22:49 2020] docker_gwbridge: port 3(veth122e42b) entered disabled state
[Fri Jan 17 07:22:49 2020] br0: port 6(veth10) entered disabled state
[Fri Jan 17 07:22:49 2020] device veth10 left promiscuous mode
[Fri Jan 17 07:22:49 2020] br0: port 6(veth10) entered disabled state
[Fri Jan 17 07:22:49 2020] docker_gwbridge: port 12(vetha0b790f) entered disabled state
[Fri Jan 17 07:22:49 2020] eth0: renamed from vethea5f680
[Fri Jan 17 07:22:49 2020] br0: port 5(veth22) entered blocking state
[Fri Jan 17 07:22:49 2020] br0: port 5(veth22) entered forwarding state
[Fri Jan 17 07:22:49 2020] eth1: renamed from veth5f8bdfe
[Fri Jan 17 07:22:49 2020] IPv6: ADDRCONF(NETDEV_CHANGE): veth2ec5aab: link becomes ready
[Fri Jan 17 07:22:49 2020] docker_gwbridge: port 6(veth2ec5aab) entered blocking state
[Fri Jan 17 07:22:49 2020] docker_gwbridge: port 6(veth2ec5aab) entered forwarding state
[Fri Jan 17 07:22:50 2020] eth0: renamed from vethb6652df
[Fri Jan 17 07:22:50 2020] br0: port 4(veth467) entered blocking state
[Fri Jan 17 07:22:50 2020] br0: port 4(veth467) entered forwarding state
[Fri Jan 17 07:22:50 2020] eth1: renamed from vethcaadc3f
[Fri Jan 17 07:22:50 2020] IPv6: ADDRCONF(NETDEV_CHANGE): veth5297c44: link becomes ready
[Fri Jan 17 07:22:50 2020] docker_gwbridge: port 11(veth5297c44) entered blocking state
[Fri Jan 17 07:22:50 2020] docker_gwbridge: port 11(veth5297c44) entered forwarding state
[Fri Jan 17 07:22:51 2020] eth0: renamed from vethb86f26f
[Fri Jan 17 07:22:51 2020] br0: port 6(veth21) entered blocking state
[Fri Jan 17 07:22:51 2020] br0: port 6(veth21) entered forwarding state
[Fri Jan 17 07:22:51 2020] eth1: renamed from vethf0973ce
[Fri Jan 17 07:22:51 2020] IPv6: ADDRCONF(NETDEV_CHANGE): vetha0b790f: link becomes ready
[Fri Jan 17 07:22:51 2020] docker_gwbridge: port 12(vetha0b790f) entered blocking state
[Fri Jan 17 07:22:51 2020] docker_gwbridge: port 12(vetha0b790f) entered forwarding state
[Fri Jan 17 07:22:52 2020] br0: port 4(veth19) entered disabled state
[Fri Jan 17 07:22:52 2020] veth16921ae: renamed from eth0
[Fri Jan 17 07:22:52 2020] docker_gwbridge: port 7(vethd0e89d9) entered disabled state
[Fri Jan 17 07:22:52 2020] veth523fe30: renamed from eth1
[Fri Jan 17 07:22:52 2020] docker_gwbridge: port 7(vethd0e89d9) entered disabled state
[Fri Jan 17 07:22:52 2020] device vethd0e89d9 left promiscuous mode
[Fri Jan 17 07:22:52 2020] docker_gwbridge: port 7(vethd0e89d9) entered disabled state
[Fri Jan 17 07:22:52 2020] br0: port 4(veth19) entered disabled state
[Fri Jan 17 07:22:52 2020] device veth19 left promiscuous mode
[Fri Jan 17 07:22:52 2020] br0: port 4(veth19) entered disabled state
[Fri Jan 17 07:22:52 2020] vethd713dc0: renamed from eth0
[Fri Jan 17 07:22:52 2020] eth0: renamed from veth45b2428
[Fri Jan 17 07:22:52 2020] br0: port 3(veth465) entered disabled state
[Fri Jan 17 07:22:52 2020] br0: port 6(veth466) entered blocking state
[Fri Jan 17 07:22:52 2020] br0: port 6(veth466) entered forwarding state
[Fri Jan 17 07:22:52 2020] eth1: renamed from veth7af776c
[Fri Jan 17 07:22:52 2020] veth6a2ad2f: renamed from eth1
[Fri Jan 17 07:22:52 2020] docker_gwbridge: port 1(vetha11775d) entered disabled state
[Fri Jan 17 07:22:52 2020] IPv6: ADDRCONF(NETDEV_CHANGE): veth69d8ae6: link becomes ready
[Fri Jan 17 07:22:52 2020] docker_gwbridge: port 8(veth69d8ae6) entered blocking state
[Fri Jan 17 07:22:52 2020] docker_gwbridge: port 8(veth69d8ae6) entered forwarding state
[Fri Jan 17 07:22:52 2020] docker_gwbridge: port 1(vetha11775d) entered disabled state
[Fri Jan 17 07:22:52 2020] device vetha11775d left promiscuous mode
[Fri Jan 17 07:22:52 2020] docker_gwbridge: port 1(vetha11775d) entered disabled state
[Fri Jan 17 07:22:52 2020] br0: port 3(veth465) entered disabled state
[Fri Jan 17 07:22:52 2020] device veth465 left promiscuous mode
[Fri Jan 17 07:22:52 2020] br0: port 3(veth465) entered disabled state
[Fri Jan 17 07:22:57 2020] br0: port 3(veth20) entered disabled state
[Fri Jan 17 07:22:57 2020] veth277410c: renamed from eth0
[Fri Jan 17 07:22:57 2020] docker_gwbridge: port 2(vethae76420) entered disabled state
[Fri Jan 17 07:22:57 2020] vethaafcc92: renamed from eth1
[Fri Jan 17 07:22:57 2020] docker_gwbridge: port 2(vethae76420) entered disabled state
[Fri Jan 17 07:22:57 2020] device vethae76420 left promiscuous mode
[Fri Jan 17 07:22:57 2020] docker_gwbridge: port 2(vethae76420) entered disabled state
[Fri Jan 17 07:22:57 2020] br0: port 3(veth20) entered disabled state
[Fri Jan 17 07:22:57 2020] device veth20 left promiscuous mode
[Fri Jan 17 07:22:57 2020] br0: port 3(veth20) entered disabled state
All Docker containers on the host restarted at that time:
ra@barn-01:~$ date
Fri Jan 17 07:27:45 EST 2020
ra@barn-01:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f0bcccb2a972 google/cadvisor:v0.33.0 "/usr/bin/cadvisor -…" 5 minutes ago Up 5 minutes (healthy) 8080/tcp monitoring_cadvisor.952iedkyjkv6up55rq7i64pc3.pyg7escdnjv4sljelcuyklu4u
9b080faa0dad stefanprodan/caddy:latest "/sbin/tini -- caddy…" 5 minutes ago Up 5 minutes monitoring_dockerd-exporter.952iedkyjkv6up55rq7i64pc3.uxl34e3mgkia4uu348x246jrv
1a33343e0515 stefanprodan/swarmprom-node-exporter:v0.16.0 "/etc/node-exporter/…" 5 minutes ago Up 5 minutes 9100/tcp monitoring_node-exporter.952iedkyjkv6up55rq7i64pc3.uisxqmds8lwhfkx6s6xy96o34
5294e5d15177 registry.speech.one/bakery-elastic:latest "/usr/local/bin/dock…" 5 minutes ago Up 5 minutes (healthy) 9200/tcp, 9300/tcp prod_elastic-1.1.wtc9rdwvy2tspe6bdcz85fe29
fdff305d583d registry.speech.one/bakery-postgres-slave:latest "/docker-entrypoint.…" 5 minutes ago Up 5 minutes 5432/tcp prod_postgres-slave-01.1.h1wbjo4nt1kjqm5x1qda99tb2
8ab7fca1d368 registry.speech.one/bakery-elastic:latest "/usr/local/bin/dock…" 5 minutes ago Up 5 minutes (healthy) 9200/tcp, 9300/tcp preprod_elastic-1.1.8myb31cnrge2oe4ylmhgvno1n
and the following journalctl log:
sudo journalctl -u docker | tail -n 300
Jan 17 07:15:01 barn-01 dockerd[1649]: time="2020-01-17T07:15:01.107574574-05:00" level=info msg="NetworkDB stats barn-01(96a9a06d3105) - netID:ih3ydbug5d7g4k7wvvqmt5a09 leaving:false netPeers:7 entries:90 Queue qLen:0 netMsg/s:1"
Jan 17 07:15:01 barn-01 dockerd[1649]: time="2020-01-17T07:15:01.107614209-05:00" level=info msg="NetworkDB stats barn-01(96a9a06d3105) - netID:n58omezmixa5vi6z33v4js5l2 leaving:false netPeers:4 entries:36 Queue qLen:0 netMsg/s:0"
Jan 17 07:15:01 barn-01 dockerd[1649]: time="2020-01-17T07:15:01.107646380-05:00" level=info msg="NetworkDB stats barn-01(96a9a06d3105) - netID:myn3onq0xgdgenfc5i7zhm7ai leaving:false netPeers:11 entries:32 Queue qLen:0 netMsg/s:0"
Jan 17 07:20:01 barn-01 dockerd[1649]: time="2020-01-17T07:20:01.307417453-05:00" level=info msg="NetworkDB stats barn-01(96a9a06d3105) - netID:n58omezmixa5vi6z33v4js5l2 leaving:false netPeers:4 entries:36 Queue qLen:0 netMsg/s:0"
Jan 17 07:20:01 barn-01 dockerd[1649]: time="2020-01-17T07:20:01.307521188-05:00" level=info msg="NetworkDB stats barn-01(96a9a06d3105) - netID:myn3onq0xgdgenfc5i7zhm7ai leaving:false netPeers:11 entries:32 Queue qLen:0 netMsg/s:0"
Jan 17 07:20:01 barn-01 dockerd[1649]: time="2020-01-17T07:20:01.307555326-05:00" level=info msg="NetworkDB stats barn-01(96a9a06d3105) - netID:k37odopbgoyz9cpv3uilp1h1c leaving:false netPeers:11 entries:91 Queue qLen:0 netMsg/s:0"
Jan 17 07:20:01 barn-01 dockerd[1649]: time="2020-01-17T07:20:01.307595128-05:00" level=info msg="NetworkDB stats barn-01(96a9a06d3105) - netID:ih3ydbug5d7g4k7wvvqmt5a09 leaving:false netPeers:7 entries:90 Queue qLen:0 netMsg/s:0"
Jan 17 07:22:05 barn-01 dockerd[1649]: time="2020-01-17T07:22:05.107656164-05:00" level=info msg="memberlist: Suspect db085bc444b4 has failed, no acks received"
Jan 17 07:22:07 barn-01 dockerd[1649]: time="2020-01-17T07:22:07.555466436-05:00" level=warning msg="memberlist: Refuting a suspect message (from: 716cf5e6e16c)"
Jan 17 07:22:09 barn-01 dockerd[1649]: time="2020-01-17T07:22:09.467911262-05:00" level=error msg="heartbeat to manager {cc8p2g9w23yftc4py6rozjkie 95.213.131.210:2377} failed" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded" method="(*session).heartbeat" module=node/agent node.id=952iedkyjkv6up55rq7i64pc3 session.id=da35coq44b4iwzy6godo878eq sessionID=da35coq44b4iwzy6godo878eq
Jan 17 07:22:09 barn-01 dockerd[1649]: time="2020-01-17T07:22:09.468021277-05:00" level=error msg="agent: session failed" backoff=100ms error="rpc error: code = DeadlineExceeded desc = context deadline exceeded" module=node/agent node.id=952iedkyjkv6up55rq7i64pc3
Jan 17 07:22:09 barn-01 dockerd[1649]: time="2020-01-17T07:22:09.468132554-05:00" level=info msg="parsed scheme: \"\"" module=grpc
Jan 17 07:22:09 barn-01 dockerd[1649]: time="2020-01-17T07:22:09.468156403-05:00" level=info msg="scheme \"\" not registered, fallback to default scheme" module=grpc
Jan 17 07:22:09 barn-01 dockerd[1649]: time="2020-01-17T07:22:09.468473713-05:00" level=info msg="ccResolverWrapper: sending update to cc: {[{141.105.66.236:2377 0 <nil>}] <nil>}" module=grpc
Jan 17 07:22:09 barn-01 dockerd[1649]: time="2020-01-17T07:22:09.468503514-05:00" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 17 07:22:09 barn-01 dockerd[1649]: time="2020-01-17T07:22:09.468540108-05:00" level=info msg="manager selected by agent for new session: {yz8061f18re1xpzlalej82t61 141.105.66.236:2377}" module=node/agent node.id=952iedkyjkv6up55rq7i64pc3
Jan 17 07:22:09 barn-01 dockerd[1649]: time="2020-01-17T07:22:09.468586653-05:00" level=info msg="waiting 79.132044ms before registering session" module=node/agent node.id=952iedkyjkv6up55rq7i64pc3
Jan 17 07:22:10 barn-01 dockerd[1649]: time="2020-01-17T07:22:10.107678076-05:00" level=info msg="memberlist: Suspect 716cf5e6e16c has failed, no acks received"
Jan 17 07:22:14 barn-01 dockerd[1649]: time="2020-01-17T07:22:14.355515631-05:00" level=warning msg="memberlist: Refuting a suspect message (from: 619baf78f350)"
Jan 17 07:22:14 barn-01 dockerd[1649]: time="2020-01-17T07:22:14.548130146-05:00" level=error msg="agent: session failed" backoff=300ms error="session initiation timed out" module=node/agent node.id=952iedkyjkv6up55rq7i64pc3
Jan 17 07:22:14 barn-01 dockerd[1649]: time="2020-01-17T07:22:14.548253430-05:00" level=info msg="parsed scheme: \"\"" module=grpc
Jan 17 07:22:14 barn-01 dockerd[1649]: time="2020-01-17T07:22:14.548277506-05:00" level=info msg="scheme \"\" not registered, fallback to default scheme" module=grpc
Jan 17 07:22:14 barn-01 dockerd[1649]: time="2020-01-17T07:22:14.548589274-05:00" level=info msg="ccResolverWrapper: sending update to cc: {[{92.53.64.188:2377 0 <nil>}] <nil>}" module=grpc
Jan 17 07:22:14 barn-01 dockerd[1649]: time="2020-01-17T07:22:14.548620610-05:00" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 17 07:22:14 barn-01 dockerd[1649]: time="2020-01-17T07:22:14.548661631-05:00" level=info msg="manager selected by agent for new session: {l6dndjoram0ptqsf370oe4njw 92.53.64.188:2377}" module=node/agent node.id=952iedkyjkv6up55rq7i64pc3
Jan 17 07:22:14 barn-01 dockerd[1649]: time="2020-01-17T07:22:14.548717648-05:00" level=info msg="waiting 208.067334ms before registering session" module=node/agent node.id=952iedkyjkv6up55rq7i64pc3
Jan 17 07:22:15 barn-01 dockerd[1649]: time="2020-01-17T07:22:15.107515174-05:00" level=info msg="memberlist: Suspect db085bc444b4 has failed, no acks received"
Jan 17 07:22:15 barn-01 dockerd[1649]: time="2020-01-17T07:22:15.947277092-05:00" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
Jan 17 07:22:15 barn-01 dockerd[1649]: time="2020-01-17T07:22:15.949633840-05:00" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
Jan 17 07:22:15 barn-01 dockerd[1649]: time="2020-01-17T07:22:15.950608296-05:00" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
Jan 17 07:22:15 barn-01 dockerd[1649]: time="2020-01-17T07:22:15.951657739-05:00" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
Jan 17 07:22:15 barn-01 dockerd[1649]: time="2020-01-17T07:22:15.952975690-05:00" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
Jan 17 07:22:16 barn-01 dockerd[1649]: time="2020-01-17T07:22:16.676033275-05:00" level=warning msg="grpc: addrConn.createTransport failed to connect to {141.105.66.236:2377 0 <nil>}. Err :connection error: desc = \"transport: authentication handshake failed: context canceled\". Reconnecting..." module=grpc
Jan 17 07:22:19 barn-01 dockerd[1649]: time="2020-01-17T07:22:19.057434028-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 17 07:22:19 barn-01 dockerd[1649]: time="2020-01-17T07:22:19.058114454-05:00" level=warning msg="rmServiceBinding f9b34a20e073bc91ed98cdb9faaa4d8442a757e75f97d1aa28a1e83273c99469 possible transient state ok:false entries:0 set:false "
Jan 17 07:22:19 barn-01 dockerd[1649]: time="2020-01-17T07:22:19.072272034-05:00" level=warning msg="rmServiceBinding 3593899b8d90ee70846fedbbbc9996d4d5f6ab9f37978d7fdc23e724b60866cb possible transient state ok:false entries:0 set:false "
Jan 17 07:22:19 barn-01 dockerd[1649]: time="2020-01-17T07:22:19.072350716-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 17 07:22:19 barn-01 dockerd[1649]: time="2020-01-17T07:22:19.675287095-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 17 07:22:19 barn-01 dockerd[1649]: time="2020-01-17T07:22:19.725847111-05:00" level=warning msg="rmServiceBinding d85c7160ff7ad71d477f17796b8e786cb716d2c5f46ceff498794a5afeb0fdca possible transient state ok:false entries:0 set:false "
Jan 17 07:22:19 barn-01 dockerd[1649]: time="2020-01-17T07:22:19.796223977-05:00" level=warning msg="7ae0ab97d1a78f4a09e52fe911e447dd5ced1f604623d77a2ae8b97790272630 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/7ae0ab97d1a78f4a09e52fe911e447dd5ced1f604623d77a2ae8b97790272630/mounts/shm, flags: 0x2: no such file or directory"
Jan 17 07:22:19 barn-01 dockerd[1649]: time="2020-01-17T07:22:19.922030007-05:00" level=warning msg="79e50c97f9ed5a7fda2eb75f7e891159c6405792af2e917f176a3d2442e63c73 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/79e50c97f9ed5a7fda2eb75f7e891159c6405792af2e917f176a3d2442e63c73/mounts/shm, flags: 0x2: no such file or directory"
Jan 17 07:22:20 barn-01 dockerd[1649]: time="2020-01-17T07:22:20.393450301-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 17 07:22:20 barn-01 dockerd[1649]: time="2020-01-17T07:22:20.451602909-05:00" level=warning msg="rmServiceBinding 938b93a3ac83e44f0eee9b0486f8c1088559a5fb34dd94afa07bdf00d9c4504e possible transient state ok:false entries:0 set:false "
Jan 17 07:22:20 barn-01 dockerd[1649]: time="2020-01-17T07:22:20.632699969-05:00" level=warning msg="5b97bff7a1978a559bbcab3a3e457edb6f60ea54ae2ae209bd23540d5dd29140 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/5b97bff7a1978a559bbcab3a3e457edb6f60ea54ae2ae209bd23540d5dd29140/mounts/shm, flags: 0x2: no such file or directory"
Jan 17 07:22:20 barn-01 dockerd[1649]: time="2020-01-17T07:22:20.907207465-05:00" level=warning msg="e329ee187f0f479f50a8b1ada9a24222d41e34c86fe6819cdf471821a66ebda0 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/e329ee187f0f479f50a8b1ada9a24222d41e34c86fe6819cdf471821a66ebda0/mounts/shm, flags: 0x2: no such file or directory"
Jan 17 07:22:21 barn-01 dockerd[1649]: time="2020-01-17T07:22:21.735004502-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 17 07:22:21 barn-01 dockerd[1649]: time="2020-01-17T07:22:21.735475195-05:00" level=warning msg="rmServiceBinding 4ee9822b62ef9b74a89fdcced36f818f8f1906502fc6a314e14445868f73dabf possible transient state ok:false entries:0 set:false "
Jan 17 07:22:24 barn-01 dockerd[1649]: time="2020-01-17T07:22:24.145639956-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 17 07:22:24 barn-01 dockerd[1649]: time="2020-01-17T07:22:24.146278837-05:00" level=warning msg="rmServiceBinding 7fed4a57d7ed72527ce39257f2d643724faa788243a682de879df08460cb377f possible transient state ok:false entries:0 set:false "
This seems to be connected with
https://github.com/moby/moby/issues/38203
I hit this issue periodically. Any ideas why, and how can I avoid it?
Update:
I found a discussion
https://github.com/systemd/systemd/issues/3374
and a fix
https://github.com/docker/libnetwork/pull/2380
which should be included in Docker 19.03.6.
Update: I still have the problem in 19.03.6.
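For anyone trying to reproduce this, a minimal way to correlate the mass restarts with the swarm agent errors (a sketch, assuming a standard Docker CLI on the affected node; the time windows are just illustrative):
$ docker version --format '{{.Server.Version}}'   # confirm the daemon really is 19.03.6
$ docker events --since 30m --filter type=container --filter event=die --format '{{.Time}} {{.Actor.Attributes.name}} exit={{.Actor.Attributes.exitCode}}'
$ sudo journalctl -u docker --since "30 min ago" | grep -E 'heartbeat|memberlist|session'
If the die events line up with the "heartbeat to manager ... DeadlineExceeded" entries, the restarts come from the swarm agent losing its manager session rather than from the containers themselves.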

Related

Docker (Snap) Containers Getting Stopped

I've installed Docker using Snap. Recently running containers have been getting stopped on their own. This happens say 2-3 times in the space of ~8-10 hours. I've been trying to find a root cause without much success. Relevant information below. Let me know if I can provide more information to help.
$ docker --version
Docker version 19.03.13, build cd8016b6bc
$ snap --version
snap 2.51.4
snapd 2.51.4
series 16
ubuntu 18.04
kernel 5.4.0-81-generic
Docker daemon.json
$ cat /var/snap/docker/current/config/daemon.json
{
"log-level": "error",
"storage-driver": "aufs",
"bip": "172.28.0.1/24"
}
$ dmesg -T
[Tue Sep 14 20:31:37 2021] aufs aufs_fill_super:918:mount[18200]: no arg
[Tue Sep 14 20:31:37 2021] overlayfs: missing 'lowerdir'
[Tue Sep 14 20:31:43 2021] br-6c6facc1a891: port 5(veth4c212a4) entered disabled state
[Tue Sep 14 20:31:43 2021] device veth4c212a4 left promiscuous mode
[Tue Sep 14 20:31:43 2021] br-6c6facc1a891: port 5(veth4c212a4) entered disabled state
[Tue Sep 14 20:31:45 2021] br-6c6facc1a891: port 1(veth1c95aae) entered disabled state
[Tue Sep 14 20:31:45 2021] device veth1c95aae left promiscuous mode
[Tue Sep 14 20:31:45 2021] br-6c6facc1a891: port 1(veth1c95aae) entered disabled state
[Tue Sep 14 20:31:45 2021] br-6c6facc1a891: port 4(veth1dfd80e) entered disabled state
[Tue Sep 14 20:31:45 2021] device veth1dfd80e left promiscuous mode
[Tue Sep 14 20:31:45 2021] br-6c6facc1a891: port 4(veth1dfd80e) entered disabled state
[Tue Sep 14 20:31:46 2021] br-6c6facc1a891: port 2(veth8e48cf4) entered disabled state
[Tue Sep 14 20:31:46 2021] device veth8e48cf4 left promiscuous mode
[Tue Sep 14 20:31:46 2021] br-6c6facc1a891: port 2(veth8e48cf4) entered disabled state
[Tue Sep 14 20:31:46 2021] br-6c6facc1a891: port 3(veth534c1d3) entered disabled state
[Tue Sep 14 20:31:46 2021] device veth534c1d3 left promiscuous mode
[Tue Sep 14 20:31:46 2021] br-6c6facc1a891: port 3(veth534c1d3) entered disabled state
[Tue Sep 14 20:31:47 2021] br-6c6facc1a891: port 6(veth316fdd7) entered disabled state
[Tue Sep 14 20:31:47 2021] device veth316fdd7 left promiscuous mode
Note the difference in timestamps between the Docker logs below and the dmesg output above.
The Docker logs appear to be from the previous time I restarted the containers using docker-compose.
$ sudo snap logs docker
2021-09-14T15:01:19Z docker.dockerd[27385]: time="2021-09-14T20:31:19.783211664+05:30" level=error msg="connecting to shim" error="dial unix \x00/containerd-shim/af7c138e4399d3bb8a5615ec05fd1ba90bc7e98391b468067374a020d792906d.sock: connect: connection refused" id=2b9e8a563dad5f61e2ad525c5d590804c33c6cd323d580fe365c170fd5a68a8a namespace=moby
2021-09-14T15:01:19Z docker.dockerd[27385]: time="2021-09-14T20:31:19.860328985+05:30" level=error msg="connecting to shim" error="dial unix \x00/containerd-shim/281fedfbf5b11053d28853b6ad6175009903b338995d5faa0862e8f1ab0e3b10.sock: connect: connection refused" id=43449775462debc8336ab1bc63e2020e8a554ee25db31befa561dc790c76e1ac namespace=moby
2021-09-14T15:01:19Z docker.dockerd[27385]: time="2021-09-14T20:31:19.878788076+05:30" level=error msg="connecting to shim" error="dial unix \x00/containerd-shim/ff2c9cacd1ef1ac083f93e4823f5d0fa4146593f2b6508a098b22270b48507b4.sock: connect: connection refused" id=4d91c4451a011d87b2d21fe7d74e3c4e7ffa20f2df69076f36567b5389597637 namespace=moby
2021-09-14T15:01:19Z docker.dockerd[27385]: time="2021-09-14T20:31:19.906212149+05:30" level=error msg="connecting to shim" error="dial unix \x00/containerd-shim/017a3907df26803a221be66a2a0ac25e43a994d26432cba30f6c81c078ad62fa.sock: connect: connection refused" id=79e0d419a1d82f83dd81898a02fa1161b909ae88c1e46575a1bec894df31a482 namespace=moby
2021-09-14T15:01:19Z docker.dockerd[27385]: time="2021-09-14T20:31:19.919895281+05:30" level=error msg="connecting to shim" error="dial unix \x00/containerd-shim/47e9b56ce80402793038edf72fe64b44a05f659371c212361e47d1463ad269ae.sock: connect: connection refused" id=99aba37c4f1521854130601f19afeb196231a924effba1cfcfb7da90b5703a86 namespace=moby
2021-09-14T15:01:19Z docker.dockerd[27385]: time="2021-09-14T20:31:19.931562562+05:30" level=error msg="connecting to shim" error="dial unix \x00/containerd-shim/58d5711ddbcc9faf6a4d8d7d0433d4254d5069c9e559d61eb1551f80d193a3eb.sock: connect: connection refused" id=a09358b02332b18dfa99b4dc99edf4b1ebac80671c29b91946875a53e1b8bd7e namespace=moby
2021-09-14T15:01:19Z docker.dockerd[27385]: time="2021-09-14T20:31:19.949511272+05:30" level=error msg="connecting to shim" error="dial unix \x00/containerd-shim/67de51fdf40350feb583255a5e703c719745ef9123a8a47dad72df075c12f953.sock: connect: connection refused" id=ee145dfe0eb44fde323a431b191a62aa47ad265c438239f7243c684e10713042 namespace=moby
2021-09-14T15:01:24Z docker.dockerd[27385]: time="2021-09-14T20:31:24.671615174+05:30" level=error msg="Force shutdown daemon"
2021-09-14T15:01:25Z systemd[1]: Stopped Service for snap application docker.dockerd.
2021-09-14T15:01:37Z systemd[1]: Started Service for snap application docker.dockerd.
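One thing worth ruling out with a Snap install (an assumption on my part, not confirmed by the logs above) is snapd auto-refreshing the docker snap, which restarts the daemon and everything it runs:
$ snap changes            # look for "Auto-refresh" entries near the stop times
$ snap refresh --time     # shows the refresh schedule and the last/next refresh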

Possible to run multiple squid containers on a single host?

I am trying to run multiple squid containers whose configs are built at container run time. Each container needs to route traffic independently from the other. Aside from where traffic is forwarded on, the configs are the same.
I can get a single squid container running and doing what I need it to with no problems.
docker run -v /var/log/squid:/var/log/squid -p 3133-3138:3133-3138 my_images/squid_test:version1.0
Trying to run a second container with:
docker run -v /var/log/squid:/var/log/squid -p 4133-4138:3133-3138 my_images/squid_test:version1.0
This instantly spits out: Aborted (core dumped)
I have one other container running on port 9000, but that's it.
This is a syslog dump from the host at the time the second container launch is attempted:
Jun 18 04:45:17 dockerdevr1 kernel: [84821.356170] docker0: port 3(veth89ab0c1) entered blocking state
Jun 18 04:45:17 dockerdevr1 kernel: [84821.356172] docker0: port 3(veth89ab0c1) entered disabled state
Jun 18 04:45:17 dockerdevr1 kernel: [84821.356209] device veth89ab0c1 entered promiscuous mode
Jun 18 04:45:17 dockerdevr1 kernel: [84821.356252] IPv6: ADDRCONF(NETDEV_UP): veth89ab0c1: link is not ready
Jun 18 04:45:17 dockerdevr1 systemd-networkd[765]: veth89ab0c1: Link UP
Jun 18 04:45:17 dockerdevr1 networkd-dispatcher[1048]: WARNING:Unknown index 421 seen, reloading interface list
Jun 18 04:45:17 dockerdevr1 systemd-udevd[25899]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jun 18 04:45:17 dockerdevr1 systemd-udevd[25900]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jun 18 04:45:17 dockerdevr1 systemd-udevd[25899]: Could not generate persistent MAC address for vethb0dffb8: No such file or directory
Jun 18 04:45:17 dockerdevr1 systemd-udevd[25900]: Could not generate persistent MAC address for veth89ab0c1: No such file or directory
Jun 18 04:45:17 dockerdevr1 containerd[1119]: time="2020-06-18T04:45:17.567627817Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/85f0acae4a948ed16b3b29988291b5df3d052b10d1965f1198745966e63c3732/shim.sock" debug=false pid=25920
Jun 18 04:45:17 dockerdevr1 kernel: [84821.841905] eth0: renamed from vethb0dffb8
Jun 18 04:45:17 dockerdevr1 kernel: [84821.858172] IPv6: ADDRCONF(NETDEV_CHANGE): veth89ab0c1: link becomes ready
Jun 18 04:45:17 dockerdevr1 kernel: [84821.858263] docker0: port 3(veth89ab0c1) entered blocking state
Jun 18 04:45:17 dockerdevr1 kernel: [84821.858265] docker0: port 3(veth89ab0c1) entered forwarding state
Jun 18 04:45:17 dockerdevr1 systemd-networkd[765]: veth89ab0c1: Gained carrier
Jun 18 04:45:19 dockerdevr1 systemd-networkd[765]: veth89ab0c1: Gained IPv6LL
Jun 18 04:45:19 dockerdevr1 containerd[1119]: time="2020-06-18T04:45:19.221654620Z" level=info msg="shim reaped" id=85f0acae4a948ed16b3b29988291b5df3d052b10d1965f1198745966e63c3732
Jun 18 04:45:19 dockerdevr1 dockerd[1171]: time="2020-06-18T04:45:19.232623257Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 18 04:45:19 dockerdevr1 systemd-networkd[765]: veth89ab0c1: Lost carrier
Jun 18 04:45:19 dockerdevr1 kernel: [84823.251203] docker0: port 3(veth89ab0c1) entered disabled state
Jun 18 04:45:19 dockerdevr1 kernel: [84823.254402] vethb0dffb8: renamed from eth0
Jun 18 04:45:19 dockerdevr1 systemd-networkd[765]: veth89ab0c1: Link DOWN
Jun 18 04:45:19 dockerdevr1 kernel: [84823.293507] docker0: port 3(veth89ab0c1) entered disabled state
Jun 18 04:45:19 dockerdevr1 kernel: [84823.294577] device veth89ab0c1 left promiscuous mode
Jun 18 04:45:19 dockerdevr1 kernel: [84823.294580] docker0: port 3(veth89ab0c1) entered disabled state
Jun 18 04:45:19 dockerdevr1 networkd-dispatcher[1048]: WARNING:Unknown index 420 seen, reloading interface list
Jun 18 04:45:19 dockerdevr1 networkd-dispatcher[1048]: ERROR:Unknown interface index 420 seen even after reload
Jun 18 04:45:19 dockerdevr1 systemd-udevd[26041]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jun 18 04:45:19 dockerdevr1 systemd-udevd[26041]: link_config: could not get ethtool features for vethb0dffb8
Jun 18 04:45:19 dockerdevr1 systemd-udevd[26041]: Could not set offload features of vethb0dffb8: No such device
Has anyone tried something similar to this? I know I can get multiple nginx containers running on different ports. Any insight would be greatly appreciated!
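One unverified guess: both containers mount the same host directory at /var/log/squid, and squid can abort if its log or PID files are already held by the first instance. A sketch of giving each instance its own log directory (the host paths are illustrative):
$ docker run -v /var/log/squid1:/var/log/squid -p 3133-3138:3133-3138 my_images/squid_test:version1.0
$ docker run -v /var/log/squid2:/var/log/squid -p 4133-4138:3133-3138 my_images/squid_test:version1.0
If the second one still core dumps with separate volumes, the shared mount is not the problem.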

Unsuccessful build on gitlab runner

Recently we have been facing the issue below while performing CI/CD builds from a GitLab runner.
Below is the log snippet from /var/log/syslog.
Apr 22 03:02:04 cirunner dockerd[1103]: time="2019-04-22T03:02:04.136857571Z" level=error msg="Handler for DELETE /v1.18/containers/runner-301e5f4d-project-786-concurrent-0-build-4 returned error: No such container: runner-301e5f4d-project-786-concurrent-0-build-4"
Apr 22 03:02:04 cirunner kernel: [1616845.656927] aufs au_opts_verify:1597:dockerd[1568]: dirperm1 breaks the protection by the permission bits on the lower branch
Apr 22 03:02:04 cirunner kernel: [1616846.186616] aufs au_opts_verify:1597:dockerd[1568]: dirperm1 breaks the protection by the permission bits on the lower branch
Apr 22 03:02:05 cirunner kernel: [1616846.383784] aufs au_opts_verify:1597:dockerd[1568]: dirperm1 breaks the protection by the permission bits on the lower branch
Apr 22 03:02:05 cirunner systemd-udevd[1187]: Could not generate persistent MAC address for veth0675b93: No such file or directory
Apr 22 03:02:05 cirunner kernel: [1616846.385245] device veth8b64bcd entered promiscuous mode
Apr 22 03:02:05 cirunner kernel: [1616846.385299] IPv6: ADDRCONF(NETDEV_UP): veth8b64bcd: link is not ready
Apr 22 03:02:05 cirunner systemd-udevd[1188]: Could not generate persistent MAC address for veth8b64bcd: No such file or directory
Apr 22 03:02:05 cirunner kernel: [1616846.788755] eth0: renamed from veth0675b93
Apr 22 03:02:05 cirunner kernel: [1616846.804716] IPv6: ADDRCONF(NETDEV_CHANGE): veth8b64bcd: link becomes ready
Apr 22 03:02:05 cirunner kernel: [1616846.804739] docker0: port 3(veth8b64bcd) entered forwarding state
Apr 22 03:02:05 cirunner kernel: [1616846.804747] docker0: port 3(veth8b64bcd) entered forwarding state
Apr 22 03:02:20 cirunner kernel: [1616861.819201] docker0: port 3(veth8b64bcd) entered forwarding state
Apr 22 03:37:13 cirunner dockerd[1103]: time="2019-04-22T03:37:13.298195303Z" level=error msg="Handler for GET /v1.18/containers/6f6b71442b5bbc70f980cd05272c8f05d514735f39e9b73b52a094a0e87db475/json returned error: No such container: 6f6b71442b5bbc70f980cd05272c8f05d514735f39e9b73b52a094a0e87db475"
Could you please help me figure out what exactly the issue is and how to troubleshoot it?
Let me know if you require additional details from my side.
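In case it helps the troubleshooting, the daemon and runner logs can be correlated around the failure time (a sketch; the unit name gitlab-runner assumes a standard systemd install):
$ sudo journalctl -u docker --since "2019-04-22 03:00" | grep -i "no such container"
$ sudo journalctl -u gitlab-runner --since "2019-04-22 03:00"
The "No such container" errors suggest the build container may be getting removed twice; the two logs side by side should show which side acts first.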

docker service create: containers exiting

Hello team. When I create a service in Docker swarm, the containers instantly exit with code 0. Below are the logs:
Feb 28 07:32:36 ip-172-31-18-123 kernel: IPVS: Creating netns size=2040 id=417
Feb 28 07:32:36 ip-172-31-18-123 NetworkManager[528]: <info> [1519803156.2518] device (vethb31b4b5): link connected
Feb 28 07:32:36 ip-172-31-18-123 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethb31b4b5: link becomes ready
Feb 28 07:32:36 ip-172-31-18-123 kernel: docker0: port 3(vethb31b4b5) entered blocking state
Feb 28 07:32:36 ip-172-31-18-123 kernel: docker0: port 3(vethb31b4b5) entered forwarding state
Feb 28 07:32:36 ip-172-31-18-123 dockerd: time="2018-02-28T07:32:36.312181706Z" level=warning msg="unknown container" container=4ac8ae6d6f542a7a7b361f7249fd749eed9b6489155f3f051b0b4f5bbbb3d0b2 module=libcontainerd namespace=plugins.moby
Feb 28 07:32:36 ip-172-31-18-123 dockerd: time="2018-02-28T07:32:36.330172710Z" level=warning msg="unknown container" container=4ac8ae6d6f542a7a7b361f7249fd749eed9b6489155f3f051b0b4f5bbbb3d0b2 module=libcontainerd namespace=plugins.moby
Feb 28 07:32:36 ip-172-31-18-123 dockerd: time="2018-02-28T07:32:36.361597892Z" level=warning msg="unknown container" container=4ac8ae6d6f542a7a7b361f7249fd749eed9b6489155f3f051b0b4f5bbbb3d0b2 module=libcontainerd namespace=plugins.moby
Feb 28 07:32:36 ip-172-31-18-123 dockerd: time="2018-02-28T07:32:36Z" level=info msg="shim reaped" id=4ac8ae6d6f542a7a7b361f7249fd749eed9b6489155f3f051b0b4f5bbbb3d0b2 module="containerd/tasks"
Feb 28 07:32:36 ip-172-31-18-123 dockerd: time="2018-02-28T07:32:36.402480985Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 28 07:32:36 ip-172-31-18-123 dockerd: time="2018-02-28T07:32:36.402535187Z" level=info msg="ignoring event" module=libcontainerd namespace=plugins.moby topic=/tasks/delete type="*events.TaskDelete"
Feb 28 07:32:36 ip-172-31-18-123 kernel: docker0: port 3(vethb31b4b5) entered disabled state
Feb 28 07:32:36 ip-172-31-18-123 NetworkManager[528]: <info> [1519803156.4258] manager: (vethd1102f2): new Veth device (/org/freedesktop/NetworkManager/Devices/4335)
Feb 28 07:32:36 ip-172-31-18-123 dockerd: time="2018-02-28T07:32:36.425967110Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
Feb 28 07:32:36 ip-172-31-18-123 dockerd: time="2018-02-28T07:32:36.425987752Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
Feb 28 07:32:36 ip-172-31-18-123 dockerd: time="2018-02-28T07:32:36.426011251Z" level=error msg="pulling image failed" error="pull access denied for ubunut, repository does not exist or may require 'docker login'" module=node/agent/taskmanager node.id=6vd6hq8l81ztlpaih0xwn6y0v service.id=8yfn38lxo6ej2244vqbnx4m0k task.id=szdix3oeko8b8e7cyg0pwpjea
Feb 28 07:32:36 ip-172-31-18-123 dockerd: time="2018-02-28T07:32:36.426589500Z" level=erro
Run a foreground process in your Docker image; then you will be able to create the service.
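For example, a minimal sketch with an image whose entrypoint stays in the foreground (note the log above also shows "pull access denied for ubunut", which may simply be a misspelled image name; docker login or fixing the name would address that part):
$ docker service create --name web --replicas 1 nginx:alpine   # nginx runs in the foreground, so the task keeps running
$ docker service ps web                                        # tasks should show Running, not Complete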

Docker overrides the IP address of my own manually created bridge

I am trying to set Docker up to connect all containers to my own manually created bridge (br0). I don't want Docker to create or edit anything in my bridge, because I have other services that use and depend on it (like OpenVPN), so I prefer to create the bridge with my own bash script.
The problem comes when I start the Docker service: Docker changes my bridge IP address from what I want (192.168.1.10) to something else (169.254.x.x)!
My Docker version is 1.12.1, build 23cf638.
The steps I took:
Bridge creation:
sudo brctl addbr br0
sudo brctl addif br0 eth0
sudo ip addr del 192.168.1.10/24 dev eth0
sudo ip addr add 192.168.1.10/24 dev br0
sudo ip route add default via 192.168.1.1 dev br0
I also deleted the default docker0 bridge.
Telling Docker to use my br0 instead of the default docker0:
I passed the -b br0 parameter in the dockerd.service start script to tell Docker to use my br0:
sudo vi /etc/systemd/system/docker.service.d/overlay.conf
I edited ExecStart to be like this:
ExecStart=/usr/bin/dockerd --storage-driver=overlay -H fd:// -b=br0
and then:
sudo systemctl daemon-reload
sudo systemctl restart docker
And now when I check my br0 IP, it is NOT 192.168.1.10 any more; it is back to 172.17.x.x. When I try to change it manually back to 192.168.1.10, the interfaces in the containers keep using 169.254.x.x instead of the IP I want.
P.S. When I check where my containers' interfaces are (brctl show), they really are in my br0. That means Docker accepted the -b br0 parameter, but it just ignores or overrides my intended IP address.
Could someone please help me overcome this problem? It looks to me like it might be a bug. I just want Docker to use my br0 with the intended IP address 192.168.1.10.
What I need is for all my containers to get an IP address in the range I want.
Thanks in advance.
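For reference, the checks I run after restarting Docker (nothing beyond the tools already used above):
$ ip addr show br0    # which address is actually on the bridge
$ brctl show br0      # which veth interfaces Docker attached
$ ip route show       # whether the default route still goes via br0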
Edited:
My /var/log/daemon.log
Oct 10 20:41:12 raspberrypi systemd[1]: Stopping Docker Application Container Engine...
Oct 10 20:41:12 raspberrypi dockerd[976]: time="2016-10-10T20:41:12.067551389Z" level=info msg="Processing signal 'terminated'"
Oct 10 20:41:12 raspberrypi dockerd[976]: time="2016-10-10T20:41:12.128388194Z" level=info msg="stopping containerd after receiving terminated"
Oct 10 20:41:13 raspberrypi systemd[1]: Stopped Docker Application Container Engine.
Oct 10 20:41:13 raspberrypi systemd[1]: Stopping Docker Socket for the API.
Oct 10 20:41:13 raspberrypi systemd[1]: Closed Docker Socket for the API.
Oct 10 20:41:13 raspberrypi systemd[1]: Stopped Docker Application Container Engine.
Oct 10 20:41:50 raspberrypi avahi-daemon[440]: Withdrawing address record for 169.254.124.135 on br0.
Oct 10 20:41:50 raspberrypi dhcpcd[698]: br0: removing IP address 169.254.124.135/16
Oct 10 20:41:50 raspberrypi avahi-daemon[440]: Leaving mDNS multicast group on interface br0.IPv4 with address 169.254.124.135.
Oct 10 20:41:50 raspberrypi avahi-daemon[440]: Interface br0.IPv4 no longer relevant for mDNS.
Oct 10 20:41:50 raspberrypi dhcpcd[698]: br0: deleting route to 169.254.0.0/16
Oct 10 20:41:52 raspberrypi ntpd[723]: Deleting interface #7 br0, 169.254.124.135#123, interface stats: received=0, sent=0, dropped=0, active_time=516 secs
Oct 10 20:41:52 raspberrypi ntpd[723]: peers refreshed
Oct 10 20:42:58 raspberrypi avahi-daemon[440]: Joining mDNS multicast group on interface br0.IPv4 with address 192.168.1.19.
Oct 10 20:42:58 raspberrypi avahi-daemon[440]: New relevant interface br0.IPv4 for mDNS.
Oct 10 20:42:58 raspberrypi avahi-daemon[440]: Registering new address record for 192.168.1.19 on br0.IPv4.
Oct 10 20:43:00 raspberrypi ntpd[723]: Listen normally on 8 br0 192.168.1.19 UDP 123
Oct 10 20:43:00 raspberrypi ntpd[723]: peers refreshed
Oct 10 20:43:15 raspberrypi systemd[1]: getty@tty1.service has no holdoff time, scheduling restart.
Oct 10 20:43:15 raspberrypi systemd[1]: Stopping Getty on tty1...
Oct 10 20:43:15 raspberrypi systemd[1]: Starting Getty on tty1...
Oct 10 20:43:15 raspberrypi systemd[1]: Started Getty on tty1.
Oct 10 20:43:21 raspberrypi systemd[1]: getty@tty1.service has no holdoff time, scheduling restart.
Oct 10 20:43:21 raspberrypi systemd[1]: Stopping Getty on tty1...
Oct 10 20:43:21 raspberrypi systemd[1]: Starting Getty on tty1...
Oct 10 20:43:21 raspberrypi systemd[1]: Started Getty on tty1.
Oct 10 20:44:31 raspberrypi systemd[1]: Starting Docker Socket for the API.
Oct 10 20:44:31 raspberrypi systemd[1]: Listening on Docker Socket for the API.
Oct 10 20:44:31 raspberrypi systemd[1]: Starting Docker Application Container Engine...
Oct 10 20:44:31 raspberrypi dockerd[1536]: time="2016-10-10T20:44:31.887581128Z" level=info msg="libcontainerd: new containerd process, pid: 1543"
Oct 10 20:44:32 raspberrypi dockerd[1536]: time="2016-10-10T20:44:32.903109872Z" level=info msg="[graphdriver] using prior storage driver \"overlay\""
Oct 10 20:44:32 raspberrypi dockerd[1536]: time="2016-10-10T20:44:32.950908429Z" level=info msg="Graph migration to content-addressability took 0.00 seconds"
Oct 10 20:44:32 raspberrypi dockerd[1536]: time="2016-10-10T20:44:32.951611338Z" level=warning msg="Your kernel does not support swap memory limit."
Oct 10 20:44:32 raspberrypi dockerd[1536]: time="2016-10-10T20:44:32.951800086Z" level=warning msg="Your kernel does not support kernel memory limit."
Oct 10 20:44:32 raspberrypi dockerd[1536]: time="2016-10-10T20:44:32.951906179Z" level=warning msg="Your kernel does not support cgroup cfs period"
Oct 10 20:44:32 raspberrypi dockerd[1536]: time="2016-10-10T20:44:32.951993522Z" level=warning msg="Your kernel does not support cgroup cfs quotas"
Oct 10 20:44:32 raspberrypi dockerd[1536]: time="2016-10-10T20:44:32.952173520Z" level=warning msg="Unable to find cpuset cgroup in mounts"
Oct 10 20:44:32 raspberrypi dockerd[1536]: time="2016-10-10T20:44:32.952372059Z" level=warning msg="mountpoint for pids not found"
Oct 10 20:44:32 raspberrypi dockerd[1536]: time="2016-10-10T20:44:32.953406319Z" level=info msg="Loading containers: start."
Oct 10 20:44:32 raspberrypi dockerd[1536]: time="2016-10-10T20:44:32.970612440Z" level=info msg="Firewalld running: false"
Oct 10 20:44:32 raspberrypi dockerd[1536]: time="2016-10-10T20:44:32.953406319Z" level=info msg="Loading containers: start."
Oct 10 20:44:32 raspberrypi dockerd[1536]: time="2016-10-10T20:44:32.970612440Z" level=info msg="Firewalld running: false"
Oct 10 20:44:33 raspberrypi avahi-daemon[440]: Withdrawing address record for 192.168.1.19 on br0.
Oct 10 20:44:33 raspberrypi avahi-daemon[440]: Leaving mDNS multicast group on interface br0.IPv4 with address 192.168.1.19.
Oct 10 20:44:33 raspberrypi avahi-daemon[440]: Interface br0.IPv4 no longer relevant for mDNS.
Oct 10 20:44:33 raspberrypi avahi-daemon[440]: Joining mDNS multicast group on interface br0.IPv4 with address 169.254.124.135.
Oct 10 20:44:33 raspberrypi avahi-daemon[440]: New relevant interface br0.IPv4 for mDNS.
Oct 10 20:44:33 raspberrypi avahi-daemon[440]: Registering new address record for 169.254.124.135 on br0.IPv4.
Oct 10 20:44:33 raspberrypi dockerd[1536]: time="2016-10-10T20:44:33.715576231Z" level=info msg="Loading containers: done."
Oct 10 20:44:33 raspberrypi dockerd[1536]: time="2016-10-10T20:44:33.715837582Z" level=info msg="Daemon has completed initialization"
Oct 10 20:44:33 raspberrypi dockerd[1536]: time="2016-10-10T20:44:33.715921435Z" level=info msg="Docker daemon" commit=23cf638 graphdriver=overlay version=1.12.1
Oct 10 20:44:33 raspberrypi systemd[1]: Started Docker Application Container Engine.
Oct 10 20:44:33 raspberrypi dockerd[1536]: time="2016-10-10T20:44:33.754984356Z" level=info msg="API listen on /var/run/docker.sock"
Oct 10 20:44:34 raspberrypi ntpd[723]: Listen normally on 9 br0 169.254.124.135 UDP 123
Oct 10 20:44:34 raspberrypi ntpd[723]: Deleting interface #8 br0, 192.168.1.19#123, interface stats: received=0, sent=0, dropped=0, active_time=94 secs
Oct 10 20:44:34 raspberrypi ntpd[723]: peers refreshed
The interesting part is the last part (copied again below):
Oct 10 20:44:33 raspberrypi avahi-daemon[440]: Withdrawing address record for 192.168.1.19 on br0.
Oct 10 20:44:33 raspberrypi avahi-daemon[440]: Leaving mDNS multicast group on interface br0.IPv4 with address 192.168.1.19.
Oct 10 20:44:33 raspberrypi avahi-daemon[440]: Interface br0.IPv4 no longer relevant for mDNS.
Oct 10 20:44:33 raspberrypi avahi-daemon[440]: Joining mDNS multicast group on interface br0.IPv4 with address 169.254.124.135.
Oct 10 20:44:33 raspberrypi avahi-daemon[440]: New relevant interface br0.IPv4 for mDNS.
Oct 10 20:44:33 raspberrypi avahi-daemon[440]: Registering new address record for 169.254.124.135 on br0.IPv4.
Oct 10 20:44:33 raspberrypi dockerd[1536]: time="2016-10-10T20:44:33.715576231Z" level=info msg="Loading containers: done."
Oct 10 20:44:33 raspberrypi dockerd[1536]: time="2016-10-10T20:44:33.715837582Z" level=info msg="Daemon has completed initialization"
Oct 10 20:44:33 raspberrypi dockerd[1536]: time="2016-10-10T20:44:33.715921435Z" level=info msg="Docker daemon" commit=23cf638 graphdriver=overlay version=1.12.1
Oct 10 20:44:33 raspberrypi systemd[1]: Started Docker Application Container Engine.
Oct 10 20:44:33 raspberrypi dockerd[1536]: time="2016-10-10T20:44:33.754984356Z" level=info msg="API listen on /var/run/docker.sock"
Oct 10 20:44:34 raspberrypi ntpd[723]: Listen normally on 9 br0 169.254.124.135 UDP 123
Oct 10 20:44:34 raspberrypi ntpd[723]: Deleting interface #8 br0, 192.168.1.19#123, interface stats: received=0, sent=0, dropped=0, active_time=94 secs
Once the Docker daemon is running, the network configuration is not editable. Try starting the Docker daemon with --bip=CIDR to set the bridge IP manually.
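A minimal sketch of that suggestion using the daemon configuration file (note that --bip/"bip" configures Docker's default docker0 bridge and is documented as incompatible with -b, so this applies if you drop the custom -b br0 setup):
$ cat /etc/docker/daemon.json
{
    "bip": "192.168.1.10/24"
}
$ sudo systemctl restart docker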
