I am trying to run multiple squid containers whose configs are built at container run time. Each container needs to route traffic independently of the others. Aside from where traffic is forwarded, the configs are the same.
I can get a single squid container running and doing what I need it to with no problems.
docker run -v /var/log/squid:/var/log/squid -p 3133-3138:3133-3138 my_images/squid_test:version1.0
Trying to run a second container with:
docker run -v /var/log/squid:/var/log/squid -p 4133-4138:3133-3138 my_images/squid_test:version1.0
This instantly spits out: Aborted (core dumped)
I have one other container running on port 9000, but that's it.
This is a syslog dump from the host at the time the second container launch is attempted:
Jun 18 04:45:17 dockerdevr1 kernel: [84821.356170] docker0: port 3(veth89ab0c1) entered blocking state
Jun 18 04:45:17 dockerdevr1 kernel: [84821.356172] docker0: port 3(veth89ab0c1) entered disabled state
Jun 18 04:45:17 dockerdevr1 kernel: [84821.356209] device veth89ab0c1 entered promiscuous mode
Jun 18 04:45:17 dockerdevr1 kernel: [84821.356252] IPv6: ADDRCONF(NETDEV_UP): veth89ab0c1: link is not ready
Jun 18 04:45:17 dockerdevr1 systemd-networkd[765]: veth89ab0c1: Link UP
Jun 18 04:45:17 dockerdevr1 networkd-dispatcher[1048]: WARNING:Unknown index 421 seen, reloading interface list
Jun 18 04:45:17 dockerdevr1 systemd-udevd[25899]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jun 18 04:45:17 dockerdevr1 systemd-udevd[25900]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jun 18 04:45:17 dockerdevr1 systemd-udevd[25899]: Could not generate persistent MAC address for vethb0dffb8: No such file or directory
Jun 18 04:45:17 dockerdevr1 systemd-udevd[25900]: Could not generate persistent MAC address for veth89ab0c1: No such file or directory
Jun 18 04:45:17 dockerdevr1 containerd[1119]: time="2020-06-18T04:45:17.567627817Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/85f0acae4a948ed16b3b29988291b5df3d052b10d1965f1198745966e63c3732/shim.sock" debug=false pid=25920
Jun 18 04:45:17 dockerdevr1 kernel: [84821.841905] eth0: renamed from vethb0dffb8
Jun 18 04:45:17 dockerdevr1 kernel: [84821.858172] IPv6: ADDRCONF(NETDEV_CHANGE): veth89ab0c1: link becomes ready
Jun 18 04:45:17 dockerdevr1 kernel: [84821.858263] docker0: port 3(veth89ab0c1) entered blocking state
Jun 18 04:45:17 dockerdevr1 kernel: [84821.858265] docker0: port 3(veth89ab0c1) entered forwarding state
Jun 18 04:45:17 dockerdevr1 systemd-networkd[765]: veth89ab0c1: Gained carrier
Jun 18 04:45:19 dockerdevr1 systemd-networkd[765]: veth89ab0c1: Gained IPv6LL
Jun 18 04:45:19 dockerdevr1 containerd[1119]: time="2020-06-18T04:45:19.221654620Z" level=info msg="shim reaped" id=85f0acae4a948ed16b3b29988291b5df3d052b10d1965f1198745966e63c3732
Jun 18 04:45:19 dockerdevr1 dockerd[1171]: time="2020-06-18T04:45:19.232623257Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 18 04:45:19 dockerdevr1 systemd-networkd[765]: veth89ab0c1: Lost carrier
Jun 18 04:45:19 dockerdevr1 kernel: [84823.251203] docker0: port 3(veth89ab0c1) entered disabled state
Jun 18 04:45:19 dockerdevr1 kernel: [84823.254402] vethb0dffb8: renamed from eth0
Jun 18 04:45:19 dockerdevr1 systemd-networkd[765]: veth89ab0c1: Link DOWN
Jun 18 04:45:19 dockerdevr1 kernel: [84823.293507] docker0: port 3(veth89ab0c1) entered disabled state
Jun 18 04:45:19 dockerdevr1 kernel: [84823.294577] device veth89ab0c1 left promiscuous mode
Jun 18 04:45:19 dockerdevr1 kernel: [84823.294580] docker0: port 3(veth89ab0c1) entered disabled state
Jun 18 04:45:19 dockerdevr1 networkd-dispatcher[1048]: WARNING:Unknown index 420 seen, reloading interface list
Jun 18 04:45:19 dockerdevr1 networkd-dispatcher[1048]: ERROR:Unknown interface index 420 seen even after reload
Jun 18 04:45:19 dockerdevr1 systemd-udevd[26041]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jun 18 04:45:19 dockerdevr1 systemd-udevd[26041]: link_config: could not get ethtool features for vethb0dffb8
Jun 18 04:45:19 dockerdevr1 systemd-udevd[26041]: Could not set offload features of vethb0dffb8: No such device
Has anyone tried something similar to this? I know I can get multiple nginx containers running on different ports. Any insight would be greatly appreciated!
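For what it's worth, one variation I still want to rule out is the shared /var/log/squid bind mount, since both containers write their logs (and possibly PID/cache files) to the same host directory. A sketch of that test, with hypothetical container names and a separate log directory for the second instance:
docker run --name squid_a -v /var/log/squid:/var/log/squid -p 3133-3138:3133-3138 my_images/squid_test:version1.0
docker run --name squid_b -v /var/log/squid_b:/var/log/squid -p 4133-4138:3133-3138 my_images/squid_test:version1.0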
Thank you for checking this.
Ubuntu 18 server on AWS EC2: docker-compose up was running just fine, but after a reboot it suddenly stopped building. Not sure what changed.
Here is the docker-compose.yml
version: '2'
services:
  web:
    build: .
    restart: "no"
    command: gulp serve --max_new_space_size=8192 --max-old-space-size=8192 -LLLL
    env_file:
      - .env
    volumes:
      - .:/app/code
    ports:
      - "8050:8000"
      - "8005:8005"
      - "8888:8888"
Here is the Dockerfile
FROM node:6.10.3
RUN mkdir /app
RUN mkdir /app/code
WORKDIR /app
# Install JavaScript requirements
COPY package.json /app/
COPY package-lock.json /app/
RUN npm install -d
RUN npm rebuild node-sass
# Link gulp
RUN ln -s /app/node_modules/.bin/gulp /usr/bin/gulp
COPY . /app/code/
WORKDIR /app/code
RUN export NODE_OPTIONS="--max-old-space-size=8192"
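# NOTE: a variable exported inside RUN only exists for that single build step;
# ENV NODE_OPTIONS="--max-old-space-size=8192" would persist it into later steps and the running container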
# Build webpack files
RUN gulp build
EXPOSE 8000
CMD gulp serve
I see some errors in the syslog, but I'm not sure if they are related.
Jun 2 15:25:24 ip-10-0-1-194 kernel: [52500.188965] docker0: port 1(veth638f141) entered blocking state
Jun 2 15:25:24 ip-10-0-1-194 kernel: [52500.188968] docker0: port 1(veth638f141) entered disabled state
Jun 2 15:25:24 ip-10-0-1-194 kernel: [52500.189101] device veth638f141 entered promiscuous mode
Jun 2 15:25:24 ip-10-0-1-194 systemd-networkd[734]: veth638f141: Link UP
Jun 2 15:25:24 ip-10-0-1-194 networkd-dispatcher[947]: WARNING:Unknown index 338 seen, reloading interface list
Jun 2 15:25:24 ip-10-0-1-194 systemd-udevd[5940]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jun 2 15:25:24 ip-10-0-1-194 systemd-udevd[5940]: Could not generate persistent MAC address for veth3a08f68: No such file or directory
Jun 2 15:25:24 ip-10-0-1-194 systemd-udevd[5941]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jun 2 15:25:24 ip-10-0-1-194 systemd-udevd[5941]: Could not generate persistent MAC address for veth638f141: No such file or directory
Jun 2 15:25:24 ip-10-0-1-194 containerd[993]: time="2021-06-02T15:25:24.995557031Z" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.containerd.runtime.v2.task/moby/7712e133ca3de4a4d407341b7a51428e984c4bcbf2311e27ffbd43cbff56ef44 pid=6001
Jun 2 15:25:25 ip-10-0-1-194 kernel: [52500.489989] eth0: renamed from veth3a08f68
Jun 2 15:25:25 ip-10-0-1-194 systemd-networkd[734]: veth638f141: Gained carrier
Jun 2 15:25:25 ip-10-0-1-194 systemd-networkd[734]: docker0: Gained carrier
Jun 2 15:25:25 ip-10-0-1-194 kernel: [52500.509809] IPv6: ADDRCONF(NETDEV_CHANGE): veth638f141: link becomes ready
Jun 2 15:25:25 ip-10-0-1-194 kernel: [52500.509869] docker0: port 1(veth638f141) entered blocking state
Jun 2 15:25:25 ip-10-0-1-194 kernel: [52500.509870] docker0: port 1(veth638f141) entered forwarding state
Jun 2 15:25:26 ip-10-0-1-194 systemd-networkd[734]: veth638f141: Gained IPv6LL
Jun 2 15:25:27 ip-10-0-1-194 containerd[993]: time="2021-06-02T15:25:27.979112078Z" level=info msg="shim disconnected" id=7712e133ca3de4a4d407341b7a51428e984c4bcbf2311e27ffbd43cbff56ef44
Jun 2 15:25:27 ip-10-0-1-194 dockerd[1010]: time="2021-06-02T15:25:27.979239439Z" level=info msg="ignoring event" container=7712e133ca3de4a4d407341b7a51428e984c4bcbf2311e27ffbd43cbff56ef44 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 2 15:25:28 ip-10-0-1-194 kernel: [52503.292104] docker0: port 1(veth638f141) entered disabled state
Jun 2 15:25:28 ip-10-0-1-194 kernel: [52503.292214] veth3a08f68: renamed from eth0
Jun 2 15:25:28 ip-10-0-1-194 systemd-networkd[734]: veth638f141: Lost carrier
Jun 2 15:25:28 ip-10-0-1-194 systemd-udevd[6146]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jun 2 15:25:28 ip-10-0-1-194 systemd-networkd[734]: veth638f141: Link DOWN
Jun 2 15:25:28 ip-10-0-1-194 kernel: [52503.350623] docker0: port 1(veth638f141) entered disabled state
Jun 2 15:25:28 ip-10-0-1-194 kernel: [52503.353895] device veth638f141 left promiscuous mode
Jun 2 15:25:28 ip-10-0-1-194 kernel: [52503.353912] docker0: port 1(veth638f141) entered disabled state
Jun 2 15:25:28 ip-10-0-1-194 networkd-dispatcher[947]: WARNING:Unknown index 337 seen, reloading interface list
Jun 2 15:25:28 ip-10-0-1-194 networkd-dispatcher[947]: ERROR:Unknown interface index 337 seen even after reload
Jun 2 15:25:29 ip-10-0-1-194 systemd-networkd[734]: docker0: Lost carrier
I have been going for hours trying to understand why Docker doesn't work on my machine. I am using Ubuntu 18 with Xfce. I installed Docker following the official site and tried to test-run an image with the command docker container run -it -p 8000:80 nginx. The first time it ran OK, but when I try again, localhost goes into an endless loading loop. It only works after I restart Docker, and then only the first time. I also tried editing the docker.service file into:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://127.0.0.1:2375
as instructed at https://docs.docker.com/install/linux/linux-postinstall/, but again there was no change.
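For completeness, this is roughly how I applied that override (a sketch; the drop-in path is the conventional systemd one, and the file name is my own choice):
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo nano /etc/systemd/system/docker.service.d/override.conf   # paste the [Service] block above
sudo systemctl daemon-reload
sudo systemctl restart docker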
Is there a solution to this problem? Is it an OS problem? Some kind of conflict? If so, how do I fix it?
UPDATE:
This is what the ip addr show docker0 command shows while the container is running:
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:4e:4b:8e:af brd ff:ff:ff:ff:ff:ff
inet6 fe80::42:4eff:fe4b:8eaf/64 scope link
valid_lft forever preferred_lft forever
UPDATE 2:
By entering the command sudo docker run -t -i nginx /bin/bash and watching tail -f /var/log/syslog, I see the following lines:
Feb 5 11:09:43 unkn0wn27-X550VX dockerd[8188]: time="2020-02-05T11:09:43.010897917+02:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 5 11:09:43 unkn0wn27-X550VX kernel: [ 1992.539911] docker0: port 2(vethb213531) entered disabled state
Feb 5 11:09:43 unkn0wn27-X550VX systemd-networkd[415]: vethb213531: Lost carrier
Feb 5 11:09:43 unkn0wn27-X550VX kernel: [ 1992.541590] veth4234a1c: renamed from eth0
Feb 5 11:09:43 unkn0wn27-X550VX systemd-timesyncd[994]: Network configuration changed, trying to establish connection.
Feb 5 11:09:43 unkn0wn27-X550VX systemd-udevd[9744]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 5 11:09:43 unkn0wn27-X550VX systemd-timesyncd[994]: Synchronized to time server 194.40.240.12:123 (194.40.240.12).
Feb 5 11:09:43 unkn0wn27-X550VX NetworkManager[1280]: <info> [1580893783.1724] manager: (veth4234a1c): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Feb 5 11:09:43 unkn0wn27-X550VX kernel: [ 1992.641548] audit: type=1107 audit(1580893783.171:310): pid=1164 uid=103 auid=4294967295 ses=4294967295 msg='apparmor="DENIED" operation="dbus_signal" bus="system" path="/org/freedesktop/NetworkManager" interface="org.freedesktop.NetworkManager" member="DeviceAdded" name=":1.13" mask="receive" pid=3643 label="snap.telegram-desktop.telegram-desktop" peer_pid=1280 peer_label="unconfined"
Feb 5 11:09:43 unkn0wn27-X550VX kernel: [ 1992.641548] exe="/usr/bin/dbus-daemon" sauid=103 hostname=? addr=? terminal=?'
Feb 5 11:09:43 unkn0wn27-X550VX kernel: [ 1992.644507] audit: type=1107 audit(1580893783.171:311): pid=1164 uid=103 auid=4294967295 ses=4294967295 msg='apparmor="DENIED" operation="dbus_signal" bus="system" path="/org/freedesktop/NetworkManager" interface="org.freedesktop.NetworkManager" member="PropertiesChanged" name=":1.13" mask="receive" pid=3643 label="snap.telegram-desktop.telegram-desktop" peer_pid=1280 peer_label="unconfined"
Feb 5 11:09:43 unkn0wn27-X550VX kernel: [ 1992.644507] exe="/usr/bin/dbus-daemon" sauid=103 hostname=? addr=? terminal=?'
Feb 5 11:09:43 unkn0wn27-X550VX NetworkManager[1280]: <info> [1580893783.1829] devices added (path: /sys/devices/virtual/net/veth4234a1c, iface: veth4234a1c)
Feb 5 11:09:43 unkn0wn27-X550VX NetworkManager[1280]: <info> [1580893783.1830] device added (path: /sys/devices/virtual/net/veth4234a1c, iface: veth4234a1c): no ifupdown configuration found.
Feb 5 11:09:43 unkn0wn27-X550VX kernel: [ 1992.653532] IPv6: ADDRCONF(NETDEV_CHANGE): veth4234a1c: link becomes ready
Feb 5 11:09:43 unkn0wn27-X550VX kernel: [ 1992.653779] docker0: port 2(vethb213531) entered blocking state
Feb 5 11:09:43 unkn0wn27-X550VX kernel: [ 1992.653784] docker0: port 2(vethb213531) entered forwarding state
Feb 5 11:09:43 unkn0wn27-X550VX networkd-dispatcher[1265]: WARNING:Unknown index 30 seen, reloading interface list
Feb 5 11:09:43 unkn0wn27-X550VX systemd-timesyncd[994]: Network configuration changed, trying to establish connection.
Feb 5 11:09:43 unkn0wn27-X550VX NetworkManager[1280]: <info> [1580893783.1911] device (veth4234a1c): carrier: link connected
Feb 5 11:09:43 unkn0wn27-X550VX NetworkManager[1280]: <info> [1580893783.1934] device (vethb213531): carrier: link connected
Feb 5 11:09:43 unkn0wn27-X550VX systemd-networkd[415]: veth4234a1c: Gained carrier
Feb 5 11:09:43 unkn0wn27-X550VX systemd-networkd[415]: vethb213531: Gained carrier
Feb 5 11:09:43 unkn0wn27-X550VX systemd-timesyncd[994]: Synchronized to time server 194.40.240.12:123 (194.40.240.12).
Feb 5 11:09:43 unkn0wn27-X550VX avahi-daemon[1159]: Interface vethb213531.IPv6 no longer relevant for mDNS.
Feb 5 11:09:43 unkn0wn27-X550VX avahi-daemon[1159]: Leaving mDNS multicast group on interface vethb213531.IPv6 with address fe80::c0e3:82ff:febc:a5c4.
Feb 5 11:09:43 unkn0wn27-X550VX kernel: [ 1992.746817] docker0: port 2(vethb213531) entered disabled state
Feb 5 11:09:43 unkn0wn27-X550VX kernel: [ 1992.753883] device vethb213531 left promiscuous mode
Feb 5 11:09:43 unkn0wn27-X550VX kernel: [ 1992.753893] docker0: port 2(vethb213531) entered disabled state
Feb 5 11:09:43 unkn0wn27-X550VX avahi-daemon[1159]: Withdrawing address record for fe80::c0e3:82ff:febc:a5c4 on vethb213531.
Feb 5 11:09:43 unkn0wn27-X550VX NetworkManager[1280]: <info> [1580893783.3453] devices removed (path: /sys/devices/virtual/net/veth4234a1c, iface: veth4234a1c)
Feb 5 11:09:43 unkn0wn27-X550VX NetworkManager[1280]: <info> [1580893783.3457] devices removed (path: /sys/devices/virtual/net/vethb213531, iface: vethb213531)
Feb 5 11:09:43 unkn0wn27-X550VX kernel: [ 1992.816316] audit: type=1107 audit(1580893783.343:312): pid=1164 uid=103 auid=4294967295 ses=4294967295 msg='apparmor="DENIED" operation="dbus_signal" bus="system" path="/org/freedesktop/NetworkManager" interface="org.freedesktop.NetworkManager" member="DeviceRemoved" name=":1.13" mask="receive" pid=3643 label="snap.telegram-desktop.telegram-desktop" peer_pid=1280 peer_label="unconfined"
Feb 5 11:09:43 unkn0wn27-X550VX kernel: [ 1992.816316] exe="/usr/bin/dbus-daemon" sauid=103 hostname=? addr=? terminal=?'
Feb 5 11:09:43 unkn0wn27-X550VX NetworkManager[1280]: <info> [1580893783.3528] device (vethb213531): released from master device docker0
Feb 5 11:09:43 unkn0wn27-X550VX systemd-networkd[415]: veth4234a1c: Lost carrier
Feb 5 11:09:43 unkn0wn27-X550VX systemd-timesyncd[994]: Network configuration changed, trying to establish connection.
Feb 5 11:09:43 unkn0wn27-X550VX systemd-networkd[415]: veth4234a1c: Removing non-existent address: fe80::42:acff:fe11:3/64 (valid forever), ignoring
Feb 5 11:09:43 unkn0wn27-X550VX systemd-networkd[415]: vethb213531: Lost carrier
Feb 5 11:09:43 unkn0wn27-X550VX systemd-timesyncd[994]: Synchronized to time server 194.40.240.12:123 (194.40.240.12).
Feb 5 11:09:43 unkn0wn27-X550VX dockerd[8188]: time="2020-02-05T11:09:43.530168634+02:00" level=warning msg="8f0463438568dab68c318d3fb928d800b9ca6ec99a918bb06bf8aea4886efa48 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/8f0463438568dab68c318d3fb928d800b9ca6ec99a918bb06bf8aea4886efa48/mounts/shm, flags: 0x2: no such file or directory"
I only spotted these lines:
Feb 5 11:09:43 unkn0wn27-X550VX kernel: [ 1992.746817] docker0: port 2(vethb213531) entered disabled state
Feb 5 11:09:43 unkn0wn27-X550VX kernel: [ 1992.753883] device vethb213531 left promiscuous mode
Feb 5 11:09:43 unkn0wn27-X550VX kernel: [ 1992.753893] docker0: port 2(vethb213531) entered disabled state
And this line:
Feb 5 11:09:43 unkn0wn27-X550VX NetworkManager[1280]: <info> [1580893783.3528] device (vethb213531): released from master device docker0
Not sure if that helps my case.
UPDATE 3:
A workaround is to repeatedly run the command sudo ip addr add 172.17.0.1/16 dev docker0.
The underlying problem was that Docker was not keeping its IPv4 address.
All I had to do was open /etc/systemd/network/mynet.network and add the following:
[Match]
Name=docker0
[Link]
Unmanaged=yes
Then restart the services with systemctl restart systemd-networkd and systemctl restart docker.
All credit for this solution goes to: https://vadosware.io/post/a-reliable-fix-to-docker-not-keeping-its-ipv4-address-on-arch/
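To check whether the fix sticks (a small sketch; 172.17.0.1/16 is just the default bridge address from the workaround above):
ip addr show docker0                        # the inet 172.17.0.1/16 line should now survive container starts and stops
docker container run -it -p 8000:80 nginx   # localhost:8000 should keep responding on the second and later runs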
I have the following dmesg log:
[Fri Jan 17 07:22:25 2020] [UFW BLOCK] IN=enp6s0 OUT= MAC=00:25:90:66:ab:2c:cc:4e:24:f9:de:60:08:00 SRC=185.176.27.162 DST=91.237.249.65 LEN=40 TOS=0x00 PREC=0x00 TTL=246 ID=34473 PROTO=TCP SPT=42928 DPT=4443 WINDOW=1024 RES=0x00 SYN URGP=0
[Fri Jan 17 07:22:44 2020] veth13: renamed from vethdc65e40
[Fri Jan 17 07:22:44 2020] br0: port 3(veth13) entered blocking state
[Fri Jan 17 07:22:44 2020] br0: port 3(veth13) entered disabled state
[Fri Jan 17 07:22:44 2020] device veth13 entered promiscuous mode
[Fri Jan 17 07:22:44 2020] veth14: renamed from vethc274e51
[Fri Jan 17 07:22:44 2020] br0: port 4(veth14) entered blocking state
[Fri Jan 17 07:22:44 2020] br0: port 4(veth14) entered disabled state
[Fri Jan 17 07:22:44 2020] device veth14 entered promiscuous mode
[Fri Jan 17 07:22:44 2020] br0: port 4(veth14) entered blocking state
[Fri Jan 17 07:22:44 2020] br0: port 4(veth14) entered forwarding state
[Fri Jan 17 07:22:44 2020] veth15: renamed from vethade6b91
[Fri Jan 17 07:22:44 2020] br0: port 5(veth15) entered blocking state
[Fri Jan 17 07:22:44 2020] br0: port 5(veth15) entered disabled state
[Fri Jan 17 07:22:44 2020] device veth15 entered promiscuous mode
[Fri Jan 17 07:22:44 2020] br0: port 5(veth15) entered blocking state
[Fri Jan 17 07:22:44 2020] br0: port 5(veth15) entered forwarding state
[Fri Jan 17 07:22:44 2020] docker_gwbridge: port 5(veth39f34fb) entered blocking state
[Fri Jan 17 07:22:44 2020] docker_gwbridge: port 5(veth39f34fb) entered disabled state
[Fri Jan 17 07:22:44 2020] device veth39f34fb entered promiscuous mode
[Fri Jan 17 07:22:44 2020] IPv6: ADDRCONF(NETDEV_UP): veth39f34fb: link is not ready
[Fri Jan 17 07:22:44 2020] docker_gwbridge: port 5(veth39f34fb) entered blocking state
[Fri Jan 17 07:22:44 2020] docker_gwbridge: port 5(veth39f34fb) entered forwarding state
[Fri Jan 17 07:22:44 2020] docker_gwbridge: port 9(veth2dd14ef) entered blocking state
[Fri Jan 17 07:22:44 2020] docker_gwbridge: port 9(veth2dd14ef) entered disabled state
[Fri Jan 17 07:22:44 2020] device veth2dd14ef entered promiscuous mode
[Fri Jan 17 07:22:44 2020] IPv6: ADDRCONF(NETDEV_UP): veth2dd14ef: link is not ready
[Fri Jan 17 07:22:44 2020] docker_gwbridge: port 9(veth2dd14ef) entered blocking state
[Fri Jan 17 07:22:44 2020] docker_gwbridge: port 9(veth2dd14ef) entered forwarding state
[Fri Jan 17 07:22:44 2020] docker_gwbridge: port 10(veth253201d) entered blocking state
[Fri Jan 17 07:22:44 2020] docker_gwbridge: port 10(veth253201d) entered disabled state
[Fri Jan 17 07:22:44 2020] device veth253201d entered promiscuous mode
[Fri Jan 17 07:22:44 2020] IPv6: ADDRCONF(NETDEV_UP): veth253201d: link is not ready
[Fri Jan 17 07:22:44 2020] docker_gwbridge: port 10(veth253201d) entered blocking state
[Fri Jan 17 07:22:44 2020] docker_gwbridge: port 10(veth253201d) entered forwarding state
[Fri Jan 17 07:22:45 2020] br0: port 4(veth14) entered disabled state
[Fri Jan 17 07:22:45 2020] br0: port 5(veth15) entered disabled state
[Fri Jan 17 07:22:45 2020] docker_gwbridge: port 5(veth39f34fb) entered disabled state
[Fri Jan 17 07:22:45 2020] docker_gwbridge: port 9(veth2dd14ef) entered disabled state
[Fri Jan 17 07:22:45 2020] docker_gwbridge: port 10(veth253201d) entered disabled state
[Fri Jan 17 07:22:45 2020] eth0: renamed from veth9c853c4
[Fri Jan 17 07:22:45 2020] br0: port 3(veth13) entered blocking state
[Fri Jan 17 07:22:45 2020] br0: port 3(veth13) entered forwarding state
[Fri Jan 17 07:22:45 2020] eth1: renamed from veth38928ee
[Fri Jan 17 07:22:46 2020] IPv6: ADDRCONF(NETDEV_CHANGE): veth39f34fb: link becomes ready
[Fri Jan 17 07:22:46 2020] docker_gwbridge: port 5(veth39f34fb) entered blocking state
[Fri Jan 17 07:22:46 2020] docker_gwbridge: port 5(veth39f34fb) entered forwarding state
[Fri Jan 17 07:22:46 2020] eth0: renamed from veth0a34354
[Fri Jan 17 07:22:46 2020] br0: port 4(veth14) entered blocking state
[Fri Jan 17 07:22:46 2020] br0: port 4(veth14) entered forwarding state
[Fri Jan 17 07:22:46 2020] eth1: renamed from veth3673041
[Fri Jan 17 07:22:46 2020] IPv6: ADDRCONF(NETDEV_CHANGE): veth2dd14ef: link becomes ready
[Fri Jan 17 07:22:46 2020] docker_gwbridge: port 9(veth2dd14ef) entered blocking state
[Fri Jan 17 07:22:46 2020] docker_gwbridge: port 9(veth2dd14ef) entered forwarding state
[Fri Jan 17 07:22:47 2020] br0: port 7(veth11) entered disabled state
[Fri Jan 17 07:22:47 2020] veth1545e82: renamed from eth0
[Fri Jan 17 07:22:47 2020] br0: port 8(veth12) entered disabled state
[Fri Jan 17 07:22:47 2020] vethb08fde7: renamed from eth0
[Fri Jan 17 07:22:47 2020] docker_gwbridge: port 6(vethccff74f) entered disabled state
[Fri Jan 17 07:22:47 2020] veth590e84b: renamed from eth1
[Fri Jan 17 07:22:47 2020] vethc6b5132: renamed from eth1
[Fri Jan 17 07:22:47 2020] docker_gwbridge: port 11(veth36b424b) entered disabled state
[Fri Jan 17 07:22:47 2020] docker_gwbridge: port 6(vethccff74f) entered disabled state
[Fri Jan 17 07:22:47 2020] device vethccff74f left promiscuous mode
[Fri Jan 17 07:22:47 2020] docker_gwbridge: port 6(vethccff74f) entered disabled state
[Fri Jan 17 07:22:47 2020] eth0: renamed from vetha1f2897
[Fri Jan 17 07:22:47 2020] docker_gwbridge: port 11(veth36b424b) entered disabled state
[Fri Jan 17 07:22:47 2020] device veth36b424b left promiscuous mode
[Fri Jan 17 07:22:47 2020] docker_gwbridge: port 11(veth36b424b) entered disabled state
[Fri Jan 17 07:22:47 2020] br0: port 5(veth15) entered blocking state
[Fri Jan 17 07:22:47 2020] br0: port 5(veth15) entered forwarding state
[Fri Jan 17 07:22:47 2020] br0: port 7(veth11) entered disabled state
[Fri Jan 17 07:22:47 2020] device veth11 left promiscuous mode
[Fri Jan 17 07:22:47 2020] br0: port 7(veth11) entered disabled state
[Fri Jan 17 07:22:48 2020] br0: port 8(veth12) entered disabled state
[Fri Jan 17 07:22:48 2020] device veth12 left promiscuous mode
[Fri Jan 17 07:22:48 2020] br0: port 8(veth12) entered disabled state
[Fri Jan 17 07:22:48 2020] eth1: renamed from veth56c6992
[Fri Jan 17 07:22:48 2020] IPv6: ADDRCONF(NETDEV_CHANGE): veth253201d: link becomes ready
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 10(veth253201d) entered blocking state
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 10(veth253201d) entered forwarding state
[Fri Jan 17 07:22:48 2020] br0: port 5(veth462) entered disabled state
[Fri Jan 17 07:22:48 2020] vethd306c3b: renamed from eth0
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 8(veth7911ac5) entered disabled state
[Fri Jan 17 07:22:48 2020] veth41eb27e: renamed from eth1
[Fri Jan 17 07:22:48 2020] veth467: renamed from veth780af70
[Fri Jan 17 07:22:48 2020] br0: port 4(veth467) entered blocking state
[Fri Jan 17 07:22:48 2020] br0: port 4(veth467) entered disabled state
[Fri Jan 17 07:22:48 2020] device veth467 entered promiscuous mode
[Fri Jan 17 07:22:48 2020] veth22: renamed from veth23b3177
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 8(veth7911ac5) entered disabled state
[Fri Jan 17 07:22:48 2020] device veth7911ac5 left promiscuous mode
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 8(veth7911ac5) entered disabled state
[Fri Jan 17 07:22:48 2020] br0: port 5(veth22) entered blocking state
[Fri Jan 17 07:22:48 2020] br0: port 5(veth22) entered disabled state
[Fri Jan 17 07:22:48 2020] device veth22 entered promiscuous mode
[Fri Jan 17 07:22:48 2020] veth466: renamed from vethc0be03e
[Fri Jan 17 07:22:48 2020] br0: port 6(veth466) entered blocking state
[Fri Jan 17 07:22:48 2020] br0: port 6(veth466) entered disabled state
[Fri Jan 17 07:22:48 2020] device veth466 entered promiscuous mode
[Fri Jan 17 07:22:48 2020] veth21: renamed from vethd3d6c20
[Fri Jan 17 07:22:48 2020] br0: port 6(veth21) entered blocking state
[Fri Jan 17 07:22:48 2020] br0: port 6(veth21) entered disabled state
[Fri Jan 17 07:22:48 2020] device veth21 entered promiscuous mode
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 6(veth2ec5aab) entered blocking state
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 6(veth2ec5aab) entered disabled state
[Fri Jan 17 07:22:48 2020] device veth2ec5aab entered promiscuous mode
[Fri Jan 17 07:22:48 2020] IPv6: ADDRCONF(NETDEV_UP): veth2ec5aab: link is not ready
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 6(veth2ec5aab) entered blocking state
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 6(veth2ec5aab) entered forwarding state
[Fri Jan 17 07:22:48 2020] br0: port 5(veth462) entered disabled state
[Fri Jan 17 07:22:48 2020] device veth462 left promiscuous mode
[Fri Jan 17 07:22:48 2020] br0: port 5(veth462) entered disabled state
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 6(veth2ec5aab) entered disabled state
[Fri Jan 17 07:22:48 2020] br0: port 6(veth10) entered disabled state
[Fri Jan 17 07:22:48 2020] vethcc03a84: renamed from eth0
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 8(veth69d8ae6) entered blocking state
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 8(veth69d8ae6) entered disabled state
[Fri Jan 17 07:22:48 2020] device veth69d8ae6 entered promiscuous mode
[Fri Jan 17 07:22:48 2020] IPv6: ADDRCONF(NETDEV_UP): veth69d8ae6: link is not ready
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 8(veth69d8ae6) entered blocking state
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 8(veth69d8ae6) entered forwarding state
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 11(veth5297c44) entered blocking state
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 11(veth5297c44) entered disabled state
[Fri Jan 17 07:22:48 2020] device veth5297c44 entered promiscuous mode
[Fri Jan 17 07:22:48 2020] IPv6: ADDRCONF(NETDEV_UP): veth5297c44: link is not ready
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 11(veth5297c44) entered blocking state
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 11(veth5297c44) entered forwarding state
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 8(veth69d8ae6) entered disabled state
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 11(veth5297c44) entered disabled state
[Fri Jan 17 07:22:48 2020] docker_gwbridge: port 3(veth122e42b) entered disabled state
[Fri Jan 17 07:22:48 2020] vethe230f27: renamed from eth1
[Fri Jan 17 07:22:49 2020] docker_gwbridge: port 12(vetha0b790f) entered blocking state
[Fri Jan 17 07:22:49 2020] docker_gwbridge: port 12(vetha0b790f) entered disabled state
[Fri Jan 17 07:22:49 2020] device vetha0b790f entered promiscuous mode
[Fri Jan 17 07:22:49 2020] IPv6: ADDRCONF(NETDEV_UP): vetha0b790f: link is not ready
[Fri Jan 17 07:22:49 2020] docker_gwbridge: port 12(vetha0b790f) entered blocking state
[Fri Jan 17 07:22:49 2020] docker_gwbridge: port 12(vetha0b790f) entered forwarding state
[Fri Jan 17 07:22:49 2020] docker_gwbridge: port 3(veth122e42b) entered disabled state
[Fri Jan 17 07:22:49 2020] device veth122e42b left promiscuous mode
[Fri Jan 17 07:22:49 2020] docker_gwbridge: port 3(veth122e42b) entered disabled state
[Fri Jan 17 07:22:49 2020] br0: port 6(veth10) entered disabled state
[Fri Jan 17 07:22:49 2020] device veth10 left promiscuous mode
[Fri Jan 17 07:22:49 2020] br0: port 6(veth10) entered disabled state
[Fri Jan 17 07:22:49 2020] docker_gwbridge: port 12(vetha0b790f) entered disabled state
[Fri Jan 17 07:22:49 2020] eth0: renamed from vethea5f680
[Fri Jan 17 07:22:49 2020] br0: port 5(veth22) entered blocking state
[Fri Jan 17 07:22:49 2020] br0: port 5(veth22) entered forwarding state
[Fri Jan 17 07:22:49 2020] eth1: renamed from veth5f8bdfe
[Fri Jan 17 07:22:49 2020] IPv6: ADDRCONF(NETDEV_CHANGE): veth2ec5aab: link becomes ready
[Fri Jan 17 07:22:49 2020] docker_gwbridge: port 6(veth2ec5aab) entered blocking state
[Fri Jan 17 07:22:49 2020] docker_gwbridge: port 6(veth2ec5aab) entered forwarding state
[Fri Jan 17 07:22:50 2020] eth0: renamed from vethb6652df
[Fri Jan 17 07:22:50 2020] br0: port 4(veth467) entered blocking state
[Fri Jan 17 07:22:50 2020] br0: port 4(veth467) entered forwarding state
[Fri Jan 17 07:22:50 2020] eth1: renamed from vethcaadc3f
[Fri Jan 17 07:22:50 2020] IPv6: ADDRCONF(NETDEV_CHANGE): veth5297c44: link becomes ready
[Fri Jan 17 07:22:50 2020] docker_gwbridge: port 11(veth5297c44) entered blocking state
[Fri Jan 17 07:22:50 2020] docker_gwbridge: port 11(veth5297c44) entered forwarding state
[Fri Jan 17 07:22:51 2020] eth0: renamed from vethb86f26f
[Fri Jan 17 07:22:51 2020] br0: port 6(veth21) entered blocking state
[Fri Jan 17 07:22:51 2020] br0: port 6(veth21) entered forwarding state
[Fri Jan 17 07:22:51 2020] eth1: renamed from vethf0973ce
[Fri Jan 17 07:22:51 2020] IPv6: ADDRCONF(NETDEV_CHANGE): vetha0b790f: link becomes ready
[Fri Jan 17 07:22:51 2020] docker_gwbridge: port 12(vetha0b790f) entered blocking state
[Fri Jan 17 07:22:51 2020] docker_gwbridge: port 12(vetha0b790f) entered forwarding state
[Fri Jan 17 07:22:52 2020] br0: port 4(veth19) entered disabled state
[Fri Jan 17 07:22:52 2020] veth16921ae: renamed from eth0
[Fri Jan 17 07:22:52 2020] docker_gwbridge: port 7(vethd0e89d9) entered disabled state
[Fri Jan 17 07:22:52 2020] veth523fe30: renamed from eth1
[Fri Jan 17 07:22:52 2020] docker_gwbridge: port 7(vethd0e89d9) entered disabled state
[Fri Jan 17 07:22:52 2020] device vethd0e89d9 left promiscuous mode
[Fri Jan 17 07:22:52 2020] docker_gwbridge: port 7(vethd0e89d9) entered disabled state
[Fri Jan 17 07:22:52 2020] br0: port 4(veth19) entered disabled state
[Fri Jan 17 07:22:52 2020] device veth19 left promiscuous mode
[Fri Jan 17 07:22:52 2020] br0: port 4(veth19) entered disabled state
[Fri Jan 17 07:22:52 2020] vethd713dc0: renamed from eth0
[Fri Jan 17 07:22:52 2020] eth0: renamed from veth45b2428
[Fri Jan 17 07:22:52 2020] br0: port 3(veth465) entered disabled state
[Fri Jan 17 07:22:52 2020] br0: port 6(veth466) entered blocking state
[Fri Jan 17 07:22:52 2020] br0: port 6(veth466) entered forwarding state
[Fri Jan 17 07:22:52 2020] eth1: renamed from veth7af776c
[Fri Jan 17 07:22:52 2020] veth6a2ad2f: renamed from eth1
[Fri Jan 17 07:22:52 2020] docker_gwbridge: port 1(vetha11775d) entered disabled state
[Fri Jan 17 07:22:52 2020] IPv6: ADDRCONF(NETDEV_CHANGE): veth69d8ae6: link becomes ready
[Fri Jan 17 07:22:52 2020] docker_gwbridge: port 8(veth69d8ae6) entered blocking state
[Fri Jan 17 07:22:52 2020] docker_gwbridge: port 8(veth69d8ae6) entered forwarding state
[Fri Jan 17 07:22:52 2020] docker_gwbridge: port 1(vetha11775d) entered disabled state
[Fri Jan 17 07:22:52 2020] device vetha11775d left promiscuous mode
[Fri Jan 17 07:22:52 2020] docker_gwbridge: port 1(vetha11775d) entered disabled state
[Fri Jan 17 07:22:52 2020] br0: port 3(veth465) entered disabled state
[Fri Jan 17 07:22:52 2020] device veth465 left promiscuous mode
[Fri Jan 17 07:22:52 2020] br0: port 3(veth465) entered disabled state
[Fri Jan 17 07:22:57 2020] br0: port 3(veth20) entered disabled state
[Fri Jan 17 07:22:57 2020] veth277410c: renamed from eth0
[Fri Jan 17 07:22:57 2020] docker_gwbridge: port 2(vethae76420) entered disabled state
[Fri Jan 17 07:22:57 2020] vethaafcc92: renamed from eth1
[Fri Jan 17 07:22:57 2020] docker_gwbridge: port 2(vethae76420) entered disabled state
[Fri Jan 17 07:22:57 2020] device vethae76420 left promiscuous mode
[Fri Jan 17 07:22:57 2020] docker_gwbridge: port 2(vethae76420) entered disabled state
[Fri Jan 17 07:22:57 2020] br0: port 3(veth20) entered disabled state
[Fri Jan 17 07:22:57 2020] device veth20 left promiscuous mode
[Fri Jan 17 07:22:57 2020] br0: port 3(veth20) entered disabled state
with all Docker containers restarting at that time:
ra@barn-01:~$ date
Fri Jan 17 07:27:45 EST 2020
ra@barn-01:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f0bcccb2a972 google/cadvisor:v0.33.0 "/usr/bin/cadvisor -…" 5 minutes ago Up 5 minutes (healthy) 8080/tcp monitoring_cadvisor.952iedkyjkv6up55rq7i64pc3.pyg7escdnjv4sljelcuyklu4u
9b080faa0dad stefanprodan/caddy:latest "/sbin/tini -- caddy…" 5 minutes ago Up 5 minutes monitoring_dockerd-exporter.952iedkyjkv6up55rq7i64pc3.uxl34e3mgkia4uu348x246jrv
1a33343e0515 stefanprodan/swarmprom-node-exporter:v0.16.0 "/etc/node-exporter/…" 5 minutes ago Up 5 minutes 9100/tcp monitoring_node-exporter.952iedkyjkv6up55rq7i64pc3.uisxqmds8lwhfkx6s6xy96o34
5294e5d15177 registry.speech.one/bakery-elastic:latest "/usr/local/bin/dock…" 5 minutes ago Up 5 minutes (healthy) 9200/tcp, 9300/tcp prod_elastic-1.1.wtc9rdwvy2tspe6bdcz85fe29
fdff305d583d registry.speech.one/bakery-postgres-slave:latest "/docker-entrypoint.…" 5 minutes ago Up 5 minutes 5432/tcp prod_postgres-slave-01.1.h1wbjo4nt1kjqm5x1qda99tb2
8ab7fca1d368 registry.speech.one/bakery-elastic:latest "/usr/local/bin/dock…" 5 minutes ago Up 5 minutes (healthy) 9200/tcp, 9300/tcp preprod_elastic-1.1.8myb31cnrge2oe4ylmhgvno1n
and the following journalctl log:
sudo journalctl -u docker | tail -n 300
Jan 17 07:15:01 barn-01 dockerd[1649]: time="2020-01-17T07:15:01.107574574-05:00" level=info msg="NetworkDB stats barn-01(96a9a06d3105) - netID:ih3ydbug5d7g4k7wvvqmt5a09 leaving:false netPeers:7 entries:90 Queue qLen:0 netMsg/s:1"
Jan 17 07:15:01 barn-01 dockerd[1649]: time="2020-01-17T07:15:01.107614209-05:00" level=info msg="NetworkDB stats barn-01(96a9a06d3105) - netID:n58omezmixa5vi6z33v4js5l2 leaving:false netPeers:4 entries:36 Queue qLen:0 netMsg/s:0"
Jan 17 07:15:01 barn-01 dockerd[1649]: time="2020-01-17T07:15:01.107646380-05:00" level=info msg="NetworkDB stats barn-01(96a9a06d3105) - netID:myn3onq0xgdgenfc5i7zhm7ai leaving:false netPeers:11 entries:32 Queue qLen:0 netMsg/s:0"
Jan 17 07:20:01 barn-01 dockerd[1649]: time="2020-01-17T07:20:01.307417453-05:00" level=info msg="NetworkDB stats barn-01(96a9a06d3105) - netID:n58omezmixa5vi6z33v4js5l2 leaving:false netPeers:4 entries:36 Queue qLen:0 netMsg/s:0"
Jan 17 07:20:01 barn-01 dockerd[1649]: time="2020-01-17T07:20:01.307521188-05:00" level=info msg="NetworkDB stats barn-01(96a9a06d3105) - netID:myn3onq0xgdgenfc5i7zhm7ai leaving:false netPeers:11 entries:32 Queue qLen:0 netMsg/s:0"
Jan 17 07:20:01 barn-01 dockerd[1649]: time="2020-01-17T07:20:01.307555326-05:00" level=info msg="NetworkDB stats barn-01(96a9a06d3105) - netID:k37odopbgoyz9cpv3uilp1h1c leaving:false netPeers:11 entries:91 Queue qLen:0 netMsg/s:0"
Jan 17 07:20:01 barn-01 dockerd[1649]: time="2020-01-17T07:20:01.307595128-05:00" level=info msg="NetworkDB stats barn-01(96a9a06d3105) - netID:ih3ydbug5d7g4k7wvvqmt5a09 leaving:false netPeers:7 entries:90 Queue qLen:0 netMsg/s:0"
Jan 17 07:22:05 barn-01 dockerd[1649]: time="2020-01-17T07:22:05.107656164-05:00" level=info msg="memberlist: Suspect db085bc444b4 has failed, no acks received"
Jan 17 07:22:07 barn-01 dockerd[1649]: time="2020-01-17T07:22:07.555466436-05:00" level=warning msg="memberlist: Refuting a suspect message (from: 716cf5e6e16c)"
Jan 17 07:22:09 barn-01 dockerd[1649]: time="2020-01-17T07:22:09.467911262-05:00" level=error msg="heartbeat to manager {cc8p2g9w23yftc4py6rozjkie 95.213.131.210:2377} failed" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded" method="(*session).heartbeat" module=node/agent node.id=952iedkyjkv6up55rq7i64pc3 session.id=da35coq44b4iwzy6godo878eq sessionID=da35coq44b4iwzy6godo878eq
Jan 17 07:22:09 barn-01 dockerd[1649]: time="2020-01-17T07:22:09.468021277-05:00" level=error msg="agent: session failed" backoff=100ms error="rpc error: code = DeadlineExceeded desc = context deadline exceeded" module=node/agent node.id=952iedkyjkv6up55rq7i64pc3
Jan 17 07:22:09 barn-01 dockerd[1649]: time="2020-01-17T07:22:09.468132554-05:00" level=info msg="parsed scheme: \"\"" module=grpc
Jan 17 07:22:09 barn-01 dockerd[1649]: time="2020-01-17T07:22:09.468156403-05:00" level=info msg="scheme \"\" not registered, fallback to default scheme" module=grpc
Jan 17 07:22:09 barn-01 dockerd[1649]: time="2020-01-17T07:22:09.468473713-05:00" level=info msg="ccResolverWrapper: sending update to cc: {[{141.105.66.236:2377 0 <nil>}] <nil>}" module=grpc
Jan 17 07:22:09 barn-01 dockerd[1649]: time="2020-01-17T07:22:09.468503514-05:00" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 17 07:22:09 barn-01 dockerd[1649]: time="2020-01-17T07:22:09.468540108-05:00" level=info msg="manager selected by agent for new session: {yz8061f18re1xpzlalej82t61 141.105.66.236:2377}" module=node/agent node.id=952iedkyjkv6up55rq7i64pc3
Jan 17 07:22:09 barn-01 dockerd[1649]: time="2020-01-17T07:22:09.468586653-05:00" level=info msg="waiting 79.132044ms before registering session" module=node/agent node.id=952iedkyjkv6up55rq7i64pc3
Jan 17 07:22:10 barn-01 dockerd[1649]: time="2020-01-17T07:22:10.107678076-05:00" level=info msg="memberlist: Suspect 716cf5e6e16c has failed, no acks received"
Jan 17 07:22:14 barn-01 dockerd[1649]: time="2020-01-17T07:22:14.355515631-05:00" level=warning msg="memberlist: Refuting a suspect message (from: 619baf78f350)"
Jan 17 07:22:14 barn-01 dockerd[1649]: time="2020-01-17T07:22:14.548130146-05:00" level=error msg="agent: session failed" backoff=300ms error="session initiation timed out" module=node/agent node.id=952iedkyjkv6up55rq7i64pc3
Jan 17 07:22:14 barn-01 dockerd[1649]: time="2020-01-17T07:22:14.548253430-05:00" level=info msg="parsed scheme: \"\"" module=grpc
Jan 17 07:22:14 barn-01 dockerd[1649]: time="2020-01-17T07:22:14.548277506-05:00" level=info msg="scheme \"\" not registered, fallback to default scheme" module=grpc
Jan 17 07:22:14 barn-01 dockerd[1649]: time="2020-01-17T07:22:14.548589274-05:00" level=info msg="ccResolverWrapper: sending update to cc: {[{92.53.64.188:2377 0 <nil>}] <nil>}" module=grpc
Jan 17 07:22:14 barn-01 dockerd[1649]: time="2020-01-17T07:22:14.548620610-05:00" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 17 07:22:14 barn-01 dockerd[1649]: time="2020-01-17T07:22:14.548661631-05:00" level=info msg="manager selected by agent for new session: {l6dndjoram0ptqsf370oe4njw 92.53.64.188:2377}" module=node/agent node.id=952iedkyjkv6up55rq7i64pc3
Jan 17 07:22:14 barn-01 dockerd[1649]: time="2020-01-17T07:22:14.548717648-05:00" level=info msg="waiting 208.067334ms before registering session" module=node/agent node.id=952iedkyjkv6up55rq7i64pc3
Jan 17 07:22:15 barn-01 dockerd[1649]: time="2020-01-17T07:22:15.107515174-05:00" level=info msg="memberlist: Suspect db085bc444b4 has failed, no acks received"
Jan 17 07:22:15 barn-01 dockerd[1649]: time="2020-01-17T07:22:15.947277092-05:00" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
Jan 17 07:22:15 barn-01 dockerd[1649]: time="2020-01-17T07:22:15.949633840-05:00" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
Jan 17 07:22:15 barn-01 dockerd[1649]: time="2020-01-17T07:22:15.950608296-05:00" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
Jan 17 07:22:15 barn-01 dockerd[1649]: time="2020-01-17T07:22:15.951657739-05:00" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
Jan 17 07:22:15 barn-01 dockerd[1649]: time="2020-01-17T07:22:15.952975690-05:00" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
Jan 17 07:22:16 barn-01 dockerd[1649]: time="2020-01-17T07:22:16.676033275-05:00" level=warning msg="grpc: addrConn.createTransport failed to connect to {141.105.66.236:2377 0 <nil>}. Err :connection error: desc = \"transport: authentication handshake failed: context canceled\". Reconnecting..." module=grpc
Jan 17 07:22:19 barn-01 dockerd[1649]: time="2020-01-17T07:22:19.057434028-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 17 07:22:19 barn-01 dockerd[1649]: time="2020-01-17T07:22:19.058114454-05:00" level=warning msg="rmServiceBinding f9b34a20e073bc91ed98cdb9faaa4d8442a757e75f97d1aa28a1e83273c99469 possible transient state ok:false entries:0 set:false "
Jan 17 07:22:19 barn-01 dockerd[1649]: time="2020-01-17T07:22:19.072272034-05:00" level=warning msg="rmServiceBinding 3593899b8d90ee70846fedbbbc9996d4d5f6ab9f37978d7fdc23e724b60866cb possible transient state ok:false entries:0 set:false "
Jan 17 07:22:19 barn-01 dockerd[1649]: time="2020-01-17T07:22:19.072350716-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 17 07:22:19 barn-01 dockerd[1649]: time="2020-01-17T07:22:19.675287095-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 17 07:22:19 barn-01 dockerd[1649]: time="2020-01-17T07:22:19.725847111-05:00" level=warning msg="rmServiceBinding d85c7160ff7ad71d477f17796b8e786cb716d2c5f46ceff498794a5afeb0fdca possible transient state ok:false entries:0 set:false "
Jan 17 07:22:19 barn-01 dockerd[1649]: time="2020-01-17T07:22:19.796223977-05:00" level=warning msg="7ae0ab97d1a78f4a09e52fe911e447dd5ced1f604623d77a2ae8b97790272630 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/7ae0ab97d1a78f4a09e52fe911e447dd5ced1f604623d77a2ae8b97790272630/mounts/shm, flags: 0x2: no such file or directory"
Jan 17 07:22:19 barn-01 dockerd[1649]: time="2020-01-17T07:22:19.922030007-05:00" level=warning msg="79e50c97f9ed5a7fda2eb75f7e891159c6405792af2e917f176a3d2442e63c73 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/79e50c97f9ed5a7fda2eb75f7e891159c6405792af2e917f176a3d2442e63c73/mounts/shm, flags: 0x2: no such file or directory"
Jan 17 07:22:20 barn-01 dockerd[1649]: time="2020-01-17T07:22:20.393450301-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 17 07:22:20 barn-01 dockerd[1649]: time="2020-01-17T07:22:20.451602909-05:00" level=warning msg="rmServiceBinding 938b93a3ac83e44f0eee9b0486f8c1088559a5fb34dd94afa07bdf00d9c4504e possible transient state ok:false entries:0 set:false "
Jan 17 07:22:20 barn-01 dockerd[1649]: time="2020-01-17T07:22:20.632699969-05:00" level=warning msg="5b97bff7a1978a559bbcab3a3e457edb6f60ea54ae2ae209bd23540d5dd29140 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/5b97bff7a1978a559bbcab3a3e457edb6f60ea54ae2ae209bd23540d5dd29140/mounts/shm, flags: 0x2: no such file or directory"
Jan 17 07:22:20 barn-01 dockerd[1649]: time="2020-01-17T07:22:20.907207465-05:00" level=warning msg="e329ee187f0f479f50a8b1ada9a24222d41e34c86fe6819cdf471821a66ebda0 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/e329ee187f0f479f50a8b1ada9a24222d41e34c86fe6819cdf471821a66ebda0/mounts/shm, flags: 0x2: no such file or directory"
Jan 17 07:22:21 barn-01 dockerd[1649]: time="2020-01-17T07:22:21.735004502-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 17 07:22:21 barn-01 dockerd[1649]: time="2020-01-17T07:22:21.735475195-05:00" level=warning msg="rmServiceBinding 4ee9822b62ef9b74a89fdcced36f818f8f1906502fc6a314e14445868f73dabf possible transient state ok:false entries:0 set:false "
Jan 17 07:22:24 barn-01 dockerd[1649]: time="2020-01-17T07:22:24.145639956-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 17 07:22:24 barn-01 dockerd[1649]: time="2020-01-17T07:22:24.146278837-05:00" level=warning msg="rmServiceBinding 7fed4a57d7ed72527ce39257f2d643724faa788243a682de879df08460cb377f possible transient state ok:false entries:0 set:false "
This seems to be connected with
https://github.com/moby/moby/issues/38203
I have this issue periodically. Any ideas why, and how to avoid it?
Update:
I found a discussion
https://github.com/systemd/systemd/issues/3374
and a fix
https://github.com/docker/libnetwork/pull/2380
which should be in Docker 19.03.6.
Update: I still have the problem in 19.03.6.
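One mitigation I am considering, assuming the root cause is systemd-networkd touching Docker's veth devices as described in the linked systemd issue (a sketch; the file name and match rule are my own, not taken from the issue):
cat <<'EOF' | sudo tee /etc/systemd/network/05-docker-veth.network
[Match]
Driver=veth
[Link]
Unmanaged=yes
EOF
sudo systemctl restart systemd-networkd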
Recently we have been facing the issue below while performing a CI/CD build from a GitLab runner.
Below is a log snippet from /var/log/syslog.
Apr 22 03:02:04 cirunner dockerd[1103]: time="2019-04-22T03:02:04.136857571Z" level=error msg="Handler for DELETE /v1.18/containers/runner-301e5f4d-project-786-concurrent-0-build-4 returned error: No such container: runner-301e5f4d-project-786-concurrent-0-build-4"
Apr 22 03:02:04 cirunner kernel: [1616845.656927] aufs au_opts_verify:1597:dockerd[1568]: dirperm1 breaks the protection by the permission bits on the lower branch
Apr 22 03:02:04 cirunner kernel: [1616846.186616] aufs au_opts_verify:1597:dockerd[1568]: dirperm1 breaks the protection by the permission bits on the lower branch
Apr 22 03:02:05 cirunner kernel: [1616846.383784] aufs au_opts_verify:1597:dockerd[1568]: dirperm1 breaks the protection by the permission bits on the lower branch
Apr 22 03:02:05 cirunner systemd-udevd[1187]: Could not generate persistent MAC address for veth0675b93: No such file or directory
Apr 22 03:02:05 cirunner kernel: [1616846.385245] device veth8b64bcd entered promiscuous mode
Apr 22 03:02:05 cirunner kernel: [1616846.385299] IPv6: ADDRCONF(NETDEV_UP): veth8b64bcd: link is not ready
Apr 22 03:02:05 cirunner systemd-udevd[1188]: Could not generate persistent MAC address for veth8b64bcd: No such file or directory
Apr 22 03:02:05 cirunner kernel: [1616846.788755] eth0: renamed from veth0675b93
Apr 22 03:02:05 cirunner kernel: [1616846.804716] IPv6: ADDRCONF(NETDEV_CHANGE): veth8b64bcd: link becomes ready
Apr 22 03:02:05 cirunner kernel: [1616846.804739] docker0: port 3(veth8b64bcd) entered forwarding state
Apr 22 03:02:05 cirunner kernel: [1616846.804747] docker0: port 3(veth8b64bcd) entered forwarding state
Apr 22 03:02:20 cirunner kernel: [1616861.819201] docker0: port 3(veth8b64bcd) entered forwarding state
Apr 22 03:37:13 cirunner dockerd[1103]: time="2019-04-22T03:37:13.298195303Z" level=error msg="Handler for GET /v1.18/containers/6f6b71442b5bbc70f980cd05272c8f05d514735f39e9b73b52a094a0e87db475/json returned error: No such container: 6f6b71442b5bbc70f980cd05272c8f05d514735f39e9b73b52a094a0e87db475"
Could you please help me understand what exactly the issue is and how to troubleshoot it?
Let me know if you need any additional details from my side.
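In case it helps, here is a sketch of the extra diagnostics I can pull from the runner host (nothing project-specific; the time window and container name prefix come from the log above):
sudo journalctl -u docker --since "2019-04-22 03:00" --until "2019-04-22 03:10"   # daemon logs around the failure
docker ps -a --filter "name=runner-"                                              # leftover runner build containers
docker info                                                                       # storage driver (aufs) and version details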
I currently have a docker setup working with haproxy as a load balancer directing traffic to containers running my web app. I'm trying to add SSL termination to HAProxy and have run into some trouble. When I add DEFAULT_SSL_CERT as an environment variable to my haproxy container I get these errors:
Mar 20 20:15:03 escapes-artist kernel: [3804709.167813] aufs au_opts_verify:1597:dockerd[1595]: dirperm1 breaks the protection by the permission bits on the lower branch
Mar 20 20:15:03 escapes-artist kernel: [3804709.213993] aufs au_opts_verify:1597:dockerd[1595]: dirperm1 breaks the protection by the permission bits on the lower branch
Mar 20 20:15:04 escapes-artist kernel: [3804709.674840] aufs au_opts_verify:1597:dockerd[1595]: dirperm1 breaks the protection by the permission bits on the lower branch
Mar 20 20:15:04 escapes-artist kernel: [3804709.688631] device vethebd7d1d entered promiscuous mode
Mar 20 20:15:04 escapes-artist kernel: [3804709.688767] IPv6: ADDRCONF(NETDEV_UP): vethebd7d1d: link is not ready
Mar 20 20:15:04 escapes-artist systemd-udevd: Could not generate persistent MAC address for veth5c0585c: No such file or directory
Mar 20 20:15:04 escapes-artist systemd-udevd: Could not generate persistent MAC address for vethebd7d1d: No such file or directory
Mar 20 20:15:04 escapes-artist dockerd: time="2017-03-21T02:15:04.671620998Z" level=warning msg="Your kernel does not support swap memory limit."
Mar 20 20:15:04 escapes-artist dockerd: time="2017-03-21T02:15:04.672345380Z" level=warning msg="Your kernel does not support cgroup rt period"
Mar 20 20:15:04 escapes-artist dockerd: time="2017-03-21T02:15:04.672732724Z" level=warning msg="Your kernel does not support cgroup rt runtime"
Mar 20 20:15:04 escapes-artist dockerd: time="2017-03-21T02:15:04Z" level=info msg="Firewalld running: false"
Mar 20 20:15:05 escapes-artist kernel: [3804710.392546] eth0: renamed from veth5c0585c
Mar 20 20:15:05 escapes-artist kernel: [3804710.395273] IPv6: ADDRCONF(NETDEV_CHANGE): vethebd7d1d: link becomes ready
Mar 20 20:15:05 escapes-artist kernel: [3804710.395303] br-5c6735a37ece: port 3(vethebd7d1d) entered forwarding state
Mar 20 20:15:05 escapes-artist kernel: [3804710.395313] br-5c6735a37ece: port 3(vethebd7d1d) entered forwarding state
Mar 20 20:15:05 escapes-artist kernel: [3804711.072047] br-5c6735a37ece: port 2(vethbaf33bd) entered forwarding state
Mar 20 20:15:08 escapes-artist kernel: [3804713.819317] haproxy[29684]: segfault at 7f560000003b ip 00007f56f6ac74bb sp 00007ffe45011290 error 4 in libcrypto.so.1.0.0[7f56f69ce000+3f3000]
Mar 20 20:15:11 escapes-artist sshd: Received disconnect from 122.194.229.7 port 21903:11: [preauth]
Mar 20 20:15:11 escapes-artist sshd: Disconnected from 122.194.229.7 port 21903 [preauth]
Mar 20 20:15:13 escapes-artist kernel: [3804718.789238] haproxy[29686]: segfault at 7fbb0000003b ip 00007fbb747b74bb sp 00007ffc944fcc10 error 4 in libcrypto.so.1.0.0[7fbb746be000+3f3000]
Mar 20 20:15:17 escapes-artist kernel: [3804722.944073] br-5c6735a37ece: port 1(veth610d1f4) entered forwarding state
Mar 20 20:15:18 escapes-artist kernel: [3804723.790663] haproxy[29688]: segfault at 7ff10000003b ip 00007ff1ad6004bb sp 00007fffa6f03cb0 error 4 in libcrypto.so.1.0.0[7ff1ad507000+3f3000]
Mar 20 20:15:20 escapes-artist kernel: [3804725.408060] br-5c6735a37ece: port 3(vethebd7d1d) entered forwarding state
Mar 20 20:15:23 escapes-artist kernel: [3804728.792134] haproxy[29690]: segfault at 7f130000003b ip 00007f13210c54bb sp 00007ffcbe3f7670 error 4 in libcrypto.so.1.0.0[7f1320fcc000+3f3000]
Mar 20 20:15:28 escapes-artist kernel: [3804733.823940] haproxy[29692]: segfault at 7f500000003b ip 00007f500b9d94bb sp 00007ffe6d044f10 error 4 in libcrypto.so.1.0.0[7f500b8e0000+3f3000]
Mar 20 20:15:33 escapes-artist kernel: [3804738.780797] haproxy[29694]: segfault at 7f000000003b ip 00007f00310124bb sp 00007fffd6e979b0 error 4 in libcrypto.so.1.0.0[7f0030f19000+3f3000]
Does anyone know how to fix this? I've experimented for hours trying different formats for the cert file, environment variables, etc. and can't seem to figure anything out. Here is the docker-compose.yml file I'm using:
version: '2'
services:
  db:
    image: mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: docker
      MYSQL_USER: admin
      MYSQL_PASSWORD: password
    volumes:
      - /storage/docker/mysql-datadir:/var/lib/mysql
    ports:
      - 3306:3306
  web:
    image: myimage
    restart: always
    depends_on:
      - db
    volumes:
      - /home/docker/persistent/media/:/home/docker/code/media/
  lb:
    image: dockercloud/haproxy
    links:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/haproxy/certs:/certs
    environment:
      STATS_AUTH: admin:password
      RSYSLOG_DESTINATION: logs5.papertrailapp.com:41747
      DEFAULT_SSL_CERT: (I've tried both pasting cert here directly and a path to cert)
    ports:
      - 80:80
      - 443:443
      - 1936:1936
I have Let's Encrypt set up on the host machine to auto-renew. The cert that I've been trying to use is a combination of privkey.pem and fullchain.pem. I've tried concatenating them, using awk 1 ORS='\n' as the dockercloud/haproxy docs suggest, and just about every other configuration I can think of. Any help would be greatly appreciated.
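For reference, this is roughly how I have been building the combined PEM (a sketch; the domain directory under /etc/letsencrypt/live/ is a placeholder, and the target path matches the /etc/haproxy/certs volume in the compose file above):
cat /etc/letsencrypt/live/example.com/privkey.pem /etc/letsencrypt/live/example.com/fullchain.pem > /etc/haproxy/certs/cert0.pem
awk 1 ORS='\n' /etc/haproxy/certs/cert0.pem   # one-line form with escaped newlines, for pasting into DEFAULT_SSL_CERT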
Also, if I use CERT_FOLDER: /certs/ instead of DEFAULT_SSL_CERT and have my certificate stored in /certs/cert0.pem I get this error instead...
Mar 20 21:19:38 escapes-artist dockerd: time="2017-03-21T03:19:38.840340234Z" level=error msg="containerd: deleting container" error="exit status 1: \"container ce6c0b6df31419691b6593be6744d01c8ccecf5f38851106aa4bb8fac915a63a does not exist\\none or more of the container deletions failed\\n\""
Mar 20 21:19:38 escapes-artist kernel: [3808584.302038] br-5c6735a37ece: port 3(veth8b1ea8e) entered disabled state
Mar 20 21:19:38 escapes-artist kernel: [3808584.302192] veth0bcd06c: renamed from eth0
Mar 20 21:19:38 escapes-artist kernel: [3808584.320863] br-5c6735a37ece: port 3(veth8b1ea8e) entered disabled state
Mar 20 21:19:38 escapes-artist kernel: [3808584.321869] device veth8b1ea8e left promiscuous mode
Mar 20 21:19:38 escapes-artist kernel: [3808584.321874] br-5c6735a37ece: port 3(veth8b1ea8e) entered disabled state
Mar 20 21:19:39 escapes-artist dockerd: time="2017-03-21T03:19:39.055316431Z" level=error msg="Handler for GET /v1.25/exec/c79e3c9b77f0c84d849cc641a425950d55fcbb22bf566922d3fd12e6a0e12e07/json returned error: Container ce6c0b6df31419691b6593be6744d01c8ccecf5f38851106aa4bb8fac915a63a is not running: Exited (0) Less than a second ago"
Mar 20 21:19:39 escapes-artist kernel: [3808584.964578] aufs au_opts_verify:1597:dockerd[23058]: dirperm1 breaks the protection by the permission bits on the lower branch
Mar 20 21:19:39 escapes-artist kernel: [3808585.005699] aufs au_opts_verify:1597:dockerd[23058]: dirperm1 breaks the protection by the permission bits on the lower branch
Mar 20 21:19:40 escapes-artist kernel: [3808585.489799] aufs au_opts_verify:1597:dockerd[1595]: dirperm1 breaks the protection by the permission bits on the lower branch
Mar 20 21:19:40 escapes-artist kernel: [3808585.500609] device veth24d6316 entered promiscuous mode
Mar 20 21:19:40 escapes-artist systemd-udevd: Could not generate persistent MAC address for veth24d6316: No such file or directory
Mar 20 21:19:40 escapes-artist kernel: [3808585.505055] IPv6: ADDRCONF(NETDEV_UP): veth24d6316: link is not ready
Mar 20 21:19:40 escapes-artist systemd-udevd: Could not generate persistent MAC address for vethedaad7c: No such file or directory
Mar 20 21:19:40 escapes-artist dockerd: time="2017-03-21T03:19:40.259076690Z" level=warning msg="Your kernel does not support swap memory limit."
Mar 20 21:19:40 escapes-artist dockerd: time="2017-03-21T03:19:40.260183880Z" level=warning msg="Your kernel does not support cgroup rt period"
Mar 20 21:19:40 escapes-artist dockerd: time="2017-03-21T03:19:40.260663645Z" level=warning msg="Your kernel does not support cgroup rt runtime"
Mar 20 21:19:40 escapes-artist dockerd: time="2017-03-21T03:19:40Z" level=info msg="Firewalld running: false"
Mar 20 21:19:40 escapes-artist kernel: [3808585.904671] eth0: renamed from vethedaad7c
Mar 20 21:19:40 escapes-artist kernel: [3808585.918744] IPv6: ADDRCONF(NETDEV_CHANGE): veth24d6316: link becomes ready
Mar 20 21:19:40 escapes-artist kernel: [3808585.919040] br-5c6735a37ece: port 3(veth24d6316) entered forwarding state
Mar 20 21:19:40 escapes-artist kernel: [3808585.919058] br-5c6735a37ece: port 3(veth24d6316) entered forwarding state
Mar 20 21:19:44 escapes-artist kernel: [3808589.585674] haproxy[32235]: segfault at 341 ip 0000000000000341 sp 00007ffe732fe5b8 error 14 in haproxy[55f6998b1000+d1000]
Mar 20 21:19:49 escapes-artist kernel: [3808594.704226] haproxy[32237]: segfault at 341 ip 0000000000000341 sp 00007ffcb4d1aa08 error 14 in haproxy[563827d10000+d1000]
Mar 20 21:19:54 escapes-artist kernel: [3808599.669540] haproxy[32239]: segfault at 341 ip 0000000000000341 sp 00007ffd1e8bb1b8 error 14 in haproxy[562d926fa000+d1000]
Mar 20 21:19:55 escapes-artist kernel: [3808600.928110] br-5c6735a37ece: port 3(veth24d6316) entered forwarding state
Mar 20 21:19:59 escapes-artist kernel: [3808604.602704] haproxy[32241]: segfault at 341 ip 0000000000000341 sp 00007fff142d0898 error 14 in haproxy[5592e3a63000+d1000]
OK, I figured out what the issue was. The dockercloud/haproxy image creates cert files and puts them in /certs/. I had mounted a volume onto /certs/, which was messing things up. I moved my mounted volume to /shared-certs/ and everything works!