I've installed Docker using Snap. Recently, running containers have been stopping on their own; this happens 2-3 times in the space of ~8-10 hours. I've been trying to find a root cause without much success. Relevant information is below. Let me know if I can provide more information to help.
$ docker --version
Docker version 19.03.13, build cd8016b6bc
$ snap --version
snap 2.51.4
snapd 2.51.4
series 16
ubuntu 18.04
kernel 5.4.0-81-generic
Docker daemon.json
$ cat /var/snap/docker/current/config/daemon.json
{
"log-level": "error",
"storage-driver": "aufs",
"bip": "172.28.0.1/24"
}
$ dmesg -T
[Tue Sep 14 20:31:37 2021] aufs aufs_fill_super:918:mount[18200]: no arg
[Tue Sep 14 20:31:37 2021] overlayfs: missing 'lowerdir'
[Tue Sep 14 20:31:43 2021] br-6c6facc1a891: port 5(veth4c212a4) entered disabled state
[Tue Sep 14 20:31:43 2021] device veth4c212a4 left promiscuous mode
[Tue Sep 14 20:31:43 2021] br-6c6facc1a891: port 5(veth4c212a4) entered disabled state
[Tue Sep 14 20:31:45 2021] br-6c6facc1a891: port 1(veth1c95aae) entered disabled state
[Tue Sep 14 20:31:45 2021] device veth1c95aae left promiscuous mode
[Tue Sep 14 20:31:45 2021] br-6c6facc1a891: port 1(veth1c95aae) entered disabled state
[Tue Sep 14 20:31:45 2021] br-6c6facc1a891: port 4(veth1dfd80e) entered disabled state
[Tue Sep 14 20:31:45 2021] device veth1dfd80e left promiscuous mode
[Tue Sep 14 20:31:45 2021] br-6c6facc1a891: port 4(veth1dfd80e) entered disabled state
[Tue Sep 14 20:31:46 2021] br-6c6facc1a891: port 2(veth8e48cf4) entered disabled state
[Tue Sep 14 20:31:46 2021] device veth8e48cf4 left promiscuous mode
[Tue Sep 14 20:31:46 2021] br-6c6facc1a891: port 2(veth8e48cf4) entered disabled state
[Tue Sep 14 20:31:46 2021] br-6c6facc1a891: port 3(veth534c1d3) entered disabled state
[Tue Sep 14 20:31:46 2021] device veth534c1d3 left promiscuous mode
[Tue Sep 14 20:31:46 2021] br-6c6facc1a891: port 3(veth534c1d3) entered disabled state
[Tue Sep 14 20:31:47 2021] br-6c6facc1a891: port 6(veth316fdd7) entered disabled state
[Tue Sep 14 20:31:47 2021] device veth316fdd7 left promiscuous mode
Note the difference in timestamps between the Docker logs below and dmesg above (the snap log prefix is in UTC, while the timestamp embedded in each message is local time, +05:30).
The Docker logs appear to be from the previous time I restarted the containers using docker-compose.
$ sudo snap logs docker
2021-09-14T15:01:19Z docker.dockerd[27385]: time="2021-09-14T20:31:19.783211664+05:30" level=error msg="connecting to shim" error="dial unix \x00/containerd-shim/af7c138e4399d3bb8a5615ec05fd1ba90bc7e98391b468067374a020d792906d.sock: connect: connection refused" id=2b9e8a563dad5f61e2ad525c5d590804c33c6cd323d580fe365c170fd5a68a8a namespace=moby
2021-09-14T15:01:19Z docker.dockerd[27385]: time="2021-09-14T20:31:19.860328985+05:30" level=error msg="connecting to shim" error="dial unix \x00/containerd-shim/281fedfbf5b11053d28853b6ad6175009903b338995d5faa0862e8f1ab0e3b10.sock: connect: connection refused" id=43449775462debc8336ab1bc63e2020e8a554ee25db31befa561dc790c76e1ac namespace=moby
2021-09-14T15:01:19Z docker.dockerd[27385]: time="2021-09-14T20:31:19.878788076+05:30" level=error msg="connecting to shim" error="dial unix \x00/containerd-shim/ff2c9cacd1ef1ac083f93e4823f5d0fa4146593f2b6508a098b22270b48507b4.sock: connect: connection refused" id=4d91c4451a011d87b2d21fe7d74e3c4e7ffa20f2df69076f36567b5389597637 namespace=moby
2021-09-14T15:01:19Z docker.dockerd[27385]: time="2021-09-14T20:31:19.906212149+05:30" level=error msg="connecting to shim" error="dial unix \x00/containerd-shim/017a3907df26803a221be66a2a0ac25e43a994d26432cba30f6c81c078ad62fa.sock: connect: connection refused" id=79e0d419a1d82f83dd81898a02fa1161b909ae88c1e46575a1bec894df31a482 namespace=moby
2021-09-14T15:01:19Z docker.dockerd[27385]: time="2021-09-14T20:31:19.919895281+05:30" level=error msg="connecting to shim" error="dial unix \x00/containerd-shim/47e9b56ce80402793038edf72fe64b44a05f659371c212361e47d1463ad269ae.sock: connect: connection refused" id=99aba37c4f1521854130601f19afeb196231a924effba1cfcfb7da90b5703a86 namespace=moby
2021-09-14T15:01:19Z docker.dockerd[27385]: time="2021-09-14T20:31:19.931562562+05:30" level=error msg="connecting to shim" error="dial unix \x00/containerd-shim/58d5711ddbcc9faf6a4d8d7d0433d4254d5069c9e559d61eb1551f80d193a3eb.sock: connect: connection refused" id=a09358b02332b18dfa99b4dc99edf4b1ebac80671c29b91946875a53e1b8bd7e namespace=moby
2021-09-14T15:01:19Z docker.dockerd[27385]: time="2021-09-14T20:31:19.949511272+05:30" level=error msg="connecting to shim" error="dial unix \x00/containerd-shim/67de51fdf40350feb583255a5e703c719745ef9123a8a47dad72df075c12f953.sock: connect: connection refused" id=ee145dfe0eb44fde323a431b191a62aa47ad265c438239f7243c684e10713042 namespace=moby
2021-09-14T15:01:24Z docker.dockerd[27385]: time="2021-09-14T20:31:24.671615174+05:30" level=error msg="Force shutdown daemon"
2021-09-14T15:01:25Z systemd[1]: Stopped Service for snap application docker.dockerd.
2021-09-14T15:01:37Z systemd[1]: Started Service for snap application docker.dockerd.
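To correlate these restarts with the container exits, it can help to pull a longer window of daemon logs. A hedged suggestion (the journal unit name assumes snapd's standard snap.<name>.<app> naming, which is not shown above):
$ sudo snap logs -n=100 docker
$ journalctl -u snap.docker.dockerd.service --since "2021-09-14 20:00"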
On one of my AWS servers I manually started a detached Docker container running willnorris/imageproxy. With no warning, it seems to go down after a few days, for no apparent (external) reason. I checked the container logs and the syslog and found nothing.
How can I find out what goes wrong (this happens every time)?
This is how I start it:
ubuntu@local:~$ ssh ubuntu@my_aws_box
ubuntu@aws_box:~$ docker run -dp 8081:8080 willnorris/imageproxy -addr 0.0.0.0:8080
Typically, this is what I do when it seems to have crashed:
ubuntu@aws_box:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
de63701bbc82 willnorris/imageproxy "/app/imageproxy -ad…" 10 days ago Exited (2) 7 days ago frosty_shockley
ubuntu@aws_box:~$ docker logs de63701bbc82
imageproxy listening on 0.0.0.0:8080
2021/08/04 00:46:42 error copying response: write tcp 172.17.0.2:8080->172.17.0.1:38568: write: broken pipe
2021/08/04 00:46:42 error copying response: write tcp 172.17.0.2:8080->172.17.0.1:38572: write: broken pipe
2021/08/04 01:29:18 invalid request URL: malformed URL "/jars": too few path segments
2021/08/04 01:29:18 invalid request URL: malformed URL "/service/extdirect": must provide absolute remote URL
2021/08/04 11:09:49 invalid request URL: malformed URL "/jars": too few path segments
2021/08/04 11:09:49 invalid request URL: malformed URL "/service/extdirect": must provide absolute remote URL
2021/08/04 13:04:33 error copying response: write tcp 172.17.0.2:8080->172.17.0.1:41036: write: broken pipe
As you can see, the logs tell me nothing about the crash, and the only real thing I have to go by is the exit status: Exited (2) 7 days ago.
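When the container logs are silent, the state Docker recorded for the container can still narrow things down. A hedged aside (these inspect fields are standard, but the command is my addition, not part of the original post):
$ docker inspect de63701bbc82 --format '{{.State.ExitCode}} {{.State.OOMKilled}} {{.State.FinishedAt}}'
If OOMKilled is true, the kernel killed the process for memory pressure; otherwise FinishedAt at least pins down when to look in the host logs.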
As this exit seemed to originate outside of the container/Docker, I needed to find the right logs. A linked-to question (which essentially makes this a dupe) hinted at checking journald on Unix systems. Running journalctl -u docker (essentially filtering the journal for the docker unit) showed that the Docker container was killed on August 6:
Aug 06 06:06:49 ip-192-168-3-117 dockerd[1045]: time="2021-08-06T06:06:49.544825959Z" level=info msg="Processing signal 'terminated'"
Aug 06 06:06:49 ip-192-168-3-117 dockerd[1045]: time="2021-08-06T06:06:49.836744355Z" level=info msg="ignoring event" container=de63701bbc828ca8bfcb895eeccae62bbda602d3be0508ceaf20fe76d7d018d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 06 06:06:49 ip-192-168-3-117 containerd[885]: time="2021-08-06T06:06:49.837480333Z" level=info msg="shim disconnected" id=de63701bbc828ca8bfcb895eeccae62bbda602d3be0508ceaf20fe76d7d018d5
Aug 06 06:06:49 ip-192-168-3-117 containerd[885]: time="2021-08-06T06:06:49.840764380Z" level=warning msg="cleaning up after shim disconnected" id=de63701bbc828ca8bfcb895eeccae62bbda602d3be0508ceaf20fe76d7d018d5 namespace=moby
Aug 06 06:06:49 ip-192-168-3-117 containerd[885]: time="2021-08-06T06:06:49.840787254Z" level=info msg="cleaning up dead shim"
Aug 06 06:06:49 ip-192-168-3-117 dockerd[1045]: time="2021-08-06T06:06:49.868008333Z" level=info msg="ignoring event" container=709e057de026ff11f783121c839c56938ea79dcd5965be1546cd6931beb5a903 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 06 06:06:49 ip-192-168-3-117 dockerd[1045]: time="2021-08-06T06:06:49.868091089Z" level=info msg="ignoring event" container=9219e652436aae8016145bf3e0681ff1bb7046f230338d8ab79f9ced9532e342 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 06 06:06:49 ip-192-168-3-117 containerd[885]: time="2021-08-06T06:06:49.868916377Z" level=info msg="shim disconnected" id=9219e652436aae8016145bf3e0681ff1bb7046f230338d8ab79f9ced9532e342
Aug 06 06:06:51 ip-192-168-3-117 dockerd[1045]: time="2021-08-06T06:06:51.068939160Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Aug 06 06:06:51 ip-192-168-3-117 dockerd[1045]: time="2021-08-06T06:06:51.069763813Z" level=info msg="Daemon shutdown complete"
Aug 06 06:06:51 ip-192-168-3-117 dockerd[1045]: time="2021-08-06T06:06:51.070022944Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Aug 06 06:06:51 ip-192-168-3-117 systemd[1]: Stopped Docker Application Container Engine.
Aug 06 06:06:51 ip-192-168-3-117 systemd[1]: Starting Docker Application Container Engine...
Now, what killed it? To find that out, I needed to not filter out the preceding events, so I just ran journalctl | grep 'Aug 06' and found these lines preceding the previous ones:
Aug 06 05:56:01 ip-192-168-3-117 systemd[1]: Starting Daily apt download activities...
Aug 06 05:56:11 ip-192-168-3-117 systemd[1]: Started Daily apt download activities.
Aug 06 06:06:39 ip-192-168-3-117 systemd[1]: Starting Daily apt upgrade and clean activities...
Aug 06 06:06:48 ip-192-168-3-117 systemd[1]: Reloading.
Aug 06 06:06:48 ip-192-168-3-117 systemd[1]: Starting Message of the Day...
Aug 06 06:06:48 ip-192-168-3-117 systemd[1]: Reloading.
Aug 06 06:06:49 ip-192-168-3-117 systemd[1]: Reloading.
Aug 06 06:06:49 ip-192-168-3-117 systemd[1]: Stopping Docker Application Container Engine...
So this was basically caused by a cron job that upgraded the Docker daemon and killed the old one! Since I did not have --restart=always, the container was not restarted after the daemon had respawned.
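The takeaway is to give the container a restart policy so the daemon brings it back after such an upgrade. A minimal sketch, reusing the run command from above (the policy flag is standard; unless-stopped would also work if you don't want restarts after an explicit docker stop):
$ docker run -d --restart=always -p 8081:8080 willnorris/imageproxy -addr 0.0.0.0:8080
An already-running container can be fixed in place:
$ docker update --restart=always de63701bbc82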
I am trying to run multiple squid containers whose configs are built at container run time. Each container needs to route traffic independently from the other. Aside from where traffic is forwarded on, the configs are the same.
I can get a single squid container running and doing what I need it to with no problems.
docker run -v /var/log/squid:/var/log/squid -p 3133-3138:3133-3138 my_images/squid_test:version1.0
Trying to run a second container with:
docker run -v /var/log/squid:/var/log/squid -p 4133-4138:3133-3138 my_images/squid_test:version1.0
This instantly spits out: Aborted (core dumped)
I have one other container running on port 9000, but that's it.
This is a syslog dump from the host at the time the second container launch is attempted
Jun 18 04:45:17 dockerdevr1 kernel: [84821.356170] docker0: port 3(veth89ab0c1) entered blocking state
Jun 18 04:45:17 dockerdevr1 kernel: [84821.356172] docker0: port 3(veth89ab0c1) entered disabled state
Jun 18 04:45:17 dockerdevr1 kernel: [84821.356209] device veth89ab0c1 entered promiscuous mode
Jun 18 04:45:17 dockerdevr1 kernel: [84821.356252] IPv6: ADDRCONF(NETDEV_UP): veth89ab0c1: link is not ready
Jun 18 04:45:17 dockerdevr1 systemd-networkd[765]: veth89ab0c1: Link UP
Jun 18 04:45:17 dockerdevr1 networkd-dispatcher[1048]: WARNING:Unknown index 421 seen, reloading interface list
Jun 18 04:45:17 dockerdevr1 systemd-udevd[25899]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jun 18 04:45:17 dockerdevr1 systemd-udevd[25900]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jun 18 04:45:17 dockerdevr1 systemd-udevd[25899]: Could not generate persistent MAC address for vethb0dffb8: No such file or directory
Jun 18 04:45:17 dockerdevr1 systemd-udevd[25900]: Could not generate persistent MAC address for veth89ab0c1: No such file or directory
Jun 18 04:45:17 dockerdevr1 containerd[1119]: time="2020-06-18T04:45:17.567627817Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/85f0acae4a948ed16b3b29988291b5df3d052b10d1965f1198745966e63c3732/shim.sock" debug=false pid=25920
Jun 18 04:45:17 dockerdevr1 kernel: [84821.841905] eth0: renamed from vethb0dffb8
Jun 18 04:45:17 dockerdevr1 kernel: [84821.858172] IPv6: ADDRCONF(NETDEV_CHANGE): veth89ab0c1: link becomes ready
Jun 18 04:45:17 dockerdevr1 kernel: [84821.858263] docker0: port 3(veth89ab0c1) entered blocking state
Jun 18 04:45:17 dockerdevr1 kernel: [84821.858265] docker0: port 3(veth89ab0c1) entered forwarding state
Jun 18 04:45:17 dockerdevr1 systemd-networkd[765]: veth89ab0c1: Gained carrier
Jun 18 04:45:19 dockerdevr1 systemd-networkd[765]: veth89ab0c1: Gained IPv6LL
Jun 18 04:45:19 dockerdevr1 containerd[1119]: time="2020-06-18T04:45:19.221654620Z" level=info msg="shim reaped" id=85f0acae4a948ed16b3b29988291b5df3d052b10d1965f1198745966e63c3732
Jun 18 04:45:19 dockerdevr1 dockerd[1171]: time="2020-06-18T04:45:19.232623257Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 18 04:45:19 dockerdevr1 systemd-networkd[765]: veth89ab0c1: Lost carrier
Jun 18 04:45:19 dockerdevr1 kernel: [84823.251203] docker0: port 3(veth89ab0c1) entered disabled state
Jun 18 04:45:19 dockerdevr1 kernel: [84823.254402] vethb0dffb8: renamed from eth0
Jun 18 04:45:19 dockerdevr1 systemd-networkd[765]: veth89ab0c1: Link DOWN
Jun 18 04:45:19 dockerdevr1 kernel: [84823.293507] docker0: port 3(veth89ab0c1) entered disabled state
Jun 18 04:45:19 dockerdevr1 kernel: [84823.294577] device veth89ab0c1 left promiscuous mode
Jun 18 04:45:19 dockerdevr1 kernel: [84823.294580] docker0: port 3(veth89ab0c1) entered disabled state
Jun 18 04:45:19 dockerdevr1 networkd-dispatcher[1048]: WARNING:Unknown index 420 seen, reloading interface list
Jun 18 04:45:19 dockerdevr1 networkd-dispatcher[1048]: ERROR:Unknown interface index 420 seen even after reload
Jun 18 04:45:19 dockerdevr1 systemd-udevd[26041]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jun 18 04:45:19 dockerdevr1 systemd-udevd[26041]: link_config: could not get ethtool features for vethb0dffb8
Jun 18 04:45:19 dockerdevr1 systemd-udevd[26041]: Could not set offload features of vethb0dffb8: No such device
Has anyone tried something similar to this? I know I can get multiple nginx containers running on different ports. Any insight would be greatly appreciated!
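One variable worth isolating (an assumption on my part, not a confirmed diagnosis): both run commands bind-mount the same host directory, /var/log/squid, so the two squid instances may be contending for the same log or cache files. A quick hedged test is to give the second container its own host directory:
$ docker run -v /var/log/squid2:/var/log/squid -p 4133-4138:3133-3138 my_images/squid_test:version1.0
If that instance stays up, the shared volume is the likely culprit.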
Hello team, when I create a service in Docker Swarm, the containers instantly exit with code 0. Logs below:
Feb 28 07:32:36 ip-172-31-18-123 kernel: IPVS: Creating netns size=2040 id=417
Feb 28 07:32:36 ip-172-31-18-123 NetworkManager[528]: <info> [1519803156.2518] device (vethb31b4b5): link connected
Feb 28 07:32:36 ip-172-31-18-123 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethb31b4b5: link becomes ready
Feb 28 07:32:36 ip-172-31-18-123 kernel: docker0: port 3(vethb31b4b5) entered blocking state
Feb 28 07:32:36 ip-172-31-18-123 kernel: docker0: port 3(vethb31b4b5) entered forwarding state
Feb 28 07:32:36 ip-172-31-18-123 dockerd: time="2018-02-28T07:32:36.312181706Z" level=warning msg="unknown container" container=4ac8ae6d6f542a7a7b361f7249fd749eed9b6489155f3f051b0b4f5bbbb3d0b2 module=libcontainerd namespace=plugins.moby
Feb 28 07:32:36 ip-172-31-18-123 dockerd: time="2018-02-28T07:32:36.330172710Z" level=warning msg="unknown container" container=4ac8ae6d6f542a7a7b361f7249fd749eed9b6489155f3f051b0b4f5bbbb3d0b2 module=libcontainerd namespace=plugins.moby
Feb 28 07:32:36 ip-172-31-18-123 dockerd: time="2018-02-28T07:32:36.361597892Z" level=warning msg="unknown container" container=4ac8ae6d6f542a7a7b361f7249fd749eed9b6489155f3f051b0b4f5bbbb3d0b2 module=libcontainerd namespace=plugins.moby
Feb 28 07:32:36 ip-172-31-18-123 dockerd: time="2018-02-28T07:32:36Z" level=info msg="shim reaped" id=4ac8ae6d6f542a7a7b361f7249fd749eed9b6489155f3f051b0b4f5bbbb3d0b2 module="containerd/tasks"
Feb 28 07:32:36 ip-172-31-18-123 dockerd: time="2018-02-28T07:32:36.402480985Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 28 07:32:36 ip-172-31-18-123 dockerd: time="2018-02-28T07:32:36.402535187Z" level=info msg="ignoring event" module=libcontainerd namespace=plugins.moby topic=/tasks/delete type="*events.TaskDelete"
Feb 28 07:32:36 ip-172-31-18-123 kernel: docker0: port 3(vethb31b4b5) entered disabled state
Feb 28 07:32:36 ip-172-31-18-123 NetworkManager[528]: <info> [1519803156.4258] manager: (vethd1102f2): new Veth device (/org/freedesktop/NetworkManager/Devices/4335)
Feb 28 07:32:36 ip-172-31-18-123 dockerd: time="2018-02-28T07:32:36.425967110Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
Feb 28 07:32:36 ip-172-31-18-123 dockerd: time="2018-02-28T07:32:36.425987752Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
Feb 28 07:32:36 ip-172-31-18-123 dockerd: time="2018-02-28T07:32:36.426011251Z" level=error msg="pulling image failed" error="pull access denied for ubunut, repository does not exist or may require 'docker login'" module=node/agent/taskmanager node.id=6vd6hq8l81ztlpaih0xwn6y0v service.id=8yfn38lxo6ej2244vqbnx4m0k task.id=szdix3oeko8b8e7cyg0pwpjea
Feb 28 07:32:36 ip-172-31-18-123 dockerd: time="2018-02-28T07:32:36.426589500Z" level=erro
Run a foreground process in your Docker image; then you will be able to create the service.
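To illustrate (my own sketch, not from the original answer): a service whose image keeps a foreground process running stays up, while one whose command exits immediately will keep restarting with exit code 0:
$ docker service create --name ok-demo nginx:alpine
$ docker service create --name bad-demo ubuntu:18.04 true
nginx:alpine runs nginx in the foreground by default, so ok-demo's tasks keep running; bad-demo's command (true) exits 0 at once, so its tasks churn exactly as described above.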
I am trying to set Docker up to connect all containers to my own manually created bridge (br0). I don't want Docker to create or edit anything in my bridge, because I have other services that use and depend on it (such as OpenVPN), so I prefer to create the bridge using my own bash script.
The problem comes when I start the Docker service: Docker changes my bridge IP address from the one I want (192.168.1.10) to something else (169.254.x.x)!
My Docker version: 1.12.1, build 23cf638
The steps I took:
Bridge creation:
sudo brctl addbr br0
sudo brctl addif br0 eth0
sudo ip addr del 192.168.1.10/24 dev eth0
sudo ip addr add 192.168.1.10/24 dev br0
sudo ip route add default via 192.168.1.1 dev br0
I also deleted the default docker0 bridge.
Telling Docker to use my br0 instead of the default docker0:
I passed the -b br0 parameter to the dockerd.service start script to tell Docker to use br0:
sudo vi /etc/systemd/system/docker.service.d/overlay.conf
I edited ExecStart to be like this:
ExecStart=/usr/bin/dockerd --storage-driver=overlay -H fd:// -b=br0
and then:
sudo systemctl daemon-reload
sudo systemctl restart docker
And now when I check my br0 IP, it is NOT 192.168.1.10 any more; it is back to 172.17.x.x, and when I try to change it back to 192.168.1.10 manually, the interfaces in the containers keep using 169.254.x.x instead of the IP I want.
P.S. When I check where my containers' interfaces are (brctl show), they really are in my br0. That means Docker accepted the -b br0 parameter, but it ignores or overrides my intended IP address.
Could someone please help me overcome this problem? It looks to me like it may be a bug. I just want Docker to use my br0 with the intended IP address 192.168.1.10.
What I need is for all my containers to get an IP address in the range I want.
Thanks in advance.
Edit:
My /var/log/daemon.log
Oct 10 20:41:12 raspberrypi systemd[1]: Stopping Docker Application Container Engine...
Oct 10 20:41:12 raspberrypi dockerd[976]: time="2016-10-10T20:41:12.067551389Z" level=info msg="Processing signal 'terminated'"
Oct 10 20:41:12 raspberrypi dockerd[976]: time="2016-10-10T20:41:12.128388194Z" level=info msg="stopping containerd after receiving terminated"
Oct 10 20:41:13 raspberrypi systemd[1]: Stopped Docker Application Container Engine.
Oct 10 20:41:13 raspberrypi systemd[1]: Stopping Docker Socket for the API.
Oct 10 20:41:13 raspberrypi systemd[1]: Closed Docker Socket for the API.
Oct 10 20:41:13 raspberrypi systemd[1]: Stopped Docker Application Container Engine.
Oct 10 20:41:50 raspberrypi avahi-daemon[440]: Withdrawing address record for 169.254.124.135 on br0.
Oct 10 20:41:50 raspberrypi dhcpcd[698]: br0: removing IP address 169.254.124.135/16
Oct 10 20:41:50 raspberrypi avahi-daemon[440]: Leaving mDNS multicast group on interface br0.IPv4 with address 169.254.124.135.
Oct 10 20:41:50 raspberrypi avahi-daemon[440]: Interface br0.IPv4 no longer relevant for mDNS.
Oct 10 20:41:50 raspberrypi dhcpcd[698]: br0: deleting route to 169.254.0.0/16
Oct 10 20:41:52 raspberrypi ntpd[723]: Deleting interface #7 br0, 169.254.124.135#123, interface stats: received=0, sent=0, dropped=0, active_time=516 secs
Oct 10 20:41:52 raspberrypi ntpd[723]: peers refreshed
Oct 10 20:42:58 raspberrypi avahi-daemon[440]: Joining mDNS multicast group on interface br0.IPv4 with address 192.168.1.19.
Oct 10 20:42:58 raspberrypi avahi-daemon[440]: New relevant interface br0.IPv4 for mDNS.
Oct 10 20:42:58 raspberrypi avahi-daemon[440]: Registering new address record for 192.168.1.19 on br0.IPv4.
Oct 10 20:43:00 raspberrypi ntpd[723]: Listen normally on 8 br0 192.168.1.19 UDP 123
Oct 10 20:43:00 raspberrypi ntpd[723]: peers refreshed
Oct 10 20:43:15 raspberrypi systemd[1]: getty#tty1.service has no holdoff time, scheduling restart.
Oct 10 20:43:15 raspberrypi systemd[1]: Stopping Getty on tty1...
Oct 10 20:43:15 raspberrypi systemd[1]: Starting Getty on tty1...
Oct 10 20:43:15 raspberrypi systemd[1]: Started Getty on tty1.
Oct 10 20:43:21 raspberrypi systemd[1]: getty#tty1.service has no holdoff time, scheduling restart.
Oct 10 20:43:21 raspberrypi systemd[1]: Stopping Getty on tty1...
Oct 10 20:43:21 raspberrypi systemd[1]: Starting Getty on tty1...
Oct 10 20:43:21 raspberrypi systemd[1]: Started Getty on tty1.
Oct 10 20:44:31 raspberrypi systemd[1]: Starting Docker Socket for the API.
Oct 10 20:44:31 raspberrypi systemd[1]: Listening on Docker Socket for the API.
Oct 10 20:44:31 raspberrypi systemd[1]: Starting Docker Application Container Engine...
Oct 10 20:44:31 raspberrypi dockerd[1536]: time="2016-10-10T20:44:31.887581128Z" level=info msg="libcontainerd: new containerd process, pid: 1543"
Oct 10 20:44:32 raspberrypi dockerd[1536]: time="2016-10-10T20:44:32.903109872Z" level=info msg="[graphdriver] using prior storage driver \"overlay\""
Oct 10 20:44:32 raspberrypi dockerd[1536]: time="2016-10-10T20:44:32.950908429Z" level=info msg="Graph migration to content-addressability took 0.00 seconds"
Oct 10 20:44:32 raspberrypi dockerd[1536]: time="2016-10-10T20:44:32.951611338Z" level=warning msg="Your kernel does not support swap memory limit."
Oct 10 20:44:32 raspberrypi dockerd[1536]: time="2016-10-10T20:44:32.951800086Z" level=warning msg="Your kernel does not support kernel memory limit."
Oct 10 20:44:32 raspberrypi dockerd[1536]: time="2016-10-10T20:44:32.951906179Z" level=warning msg="Your kernel does not support cgroup cfs period"
Oct 10 20:44:32 raspberrypi dockerd[1536]: time="2016-10-10T20:44:32.951993522Z" level=warning msg="Your kernel does not support cgroup cfs quotas"
Oct 10 20:44:32 raspberrypi dockerd[1536]: time="2016-10-10T20:44:32.952173520Z" level=warning msg="Unable to find cpuset cgroup in mounts"
Oct 10 20:44:32 raspberrypi dockerd[1536]: time="2016-10-10T20:44:32.952372059Z" level=warning msg="mountpoint for pids not found"
Oct 10 20:44:32 raspberrypi dockerd[1536]: time="2016-10-10T20:44:32.953406319Z" level=info msg="Loading containers: start."
Oct 10 20:44:32 raspberrypi dockerd[1536]: time="2016-10-10T20:44:32.970612440Z" level=info msg="Firewalld running: false"
Oct 10 20:44:33 raspberrypi avahi-daemon[440]: Withdrawing address record for 192.168.1.19 on br0.
Oct 10 20:44:33 raspberrypi avahi-daemon[440]: Leaving mDNS multicast group on interface br0.IPv4 with address 192.168.1.19.
Oct 10 20:44:33 raspberrypi avahi-daemon[440]: Interface br0.IPv4 no longer relevant for mDNS.
Oct 10 20:44:33 raspberrypi avahi-daemon[440]: Joining mDNS multicast group on interface br0.IPv4 with address 169.254.124.135.
Oct 10 20:44:33 raspberrypi avahi-daemon[440]: New relevant interface br0.IPv4 for mDNS.
Oct 10 20:44:33 raspberrypi avahi-daemon[440]: Registering new address record for 169.254.124.135 on br0.IPv4.
Oct 10 20:44:33 raspberrypi dockerd[1536]: time="2016-10-10T20:44:33.715576231Z" level=info msg="Loading containers: done."
Oct 10 20:44:33 raspberrypi dockerd[1536]: time="2016-10-10T20:44:33.715837582Z" level=info msg="Daemon has completed initialization"
Oct 10 20:44:33 raspberrypi dockerd[1536]: time="2016-10-10T20:44:33.715921435Z" level=info msg="Docker daemon" commit=23cf638 graphdriver=overlay version=1.12.1
Oct 10 20:44:33 raspberrypi systemd[1]: Started Docker Application Container Engine.
Oct 10 20:44:33 raspberrypi dockerd[1536]: time="2016-10-10T20:44:33.754984356Z" level=info msg="API listen on /var/run/docker.sock"
Oct 10 20:44:34 raspberrypi ntpd[723]: Listen normally on 9 br0 169.254.124.135 UDP 123
Oct 10 20:44:34 raspberrypi ntpd[723]: Deleting interface #8 br0, 192.168.1.19#123, interface stats: received=0, sent=0, dropped=0, active_time=94 secs
Oct 10 20:44:34 raspberrypi ntpd[723]: peers refreshed
The interesting part is the last part (recopied below):
Oct 10 20:44:33 raspberrypi avahi-daemon[440]: Withdrawing address record for 192.168.1.19 on br0.
Oct 10 20:44:33 raspberrypi avahi-daemon[440]: Leaving mDNS multicast group on interface br0.IPv4 with address 192.168.1.19.
Oct 10 20:44:33 raspberrypi avahi-daemon[440]: Interface br0.IPv4 no longer relevant for mDNS.
Oct 10 20:44:33 raspberrypi avahi-daemon[440]: Joining mDNS multicast group on interface br0.IPv4 with address 169.254.124.135.
Oct 10 20:44:33 raspberrypi avahi-daemon[440]: New relevant interface br0.IPv4 for mDNS.
Oct 10 20:44:33 raspberrypi avahi-daemon[440]: Registering new address record for 169.254.124.135 on br0.IPv4.
Oct 10 20:44:33 raspberrypi dockerd[1536]: time="2016-10-10T20:44:33.715576231Z" level=info msg="Loading containers: done."
Oct 10 20:44:33 raspberrypi dockerd[1536]: time="2016-10-10T20:44:33.715837582Z" level=info msg="Daemon has completed initialization"
Oct 10 20:44:33 raspberrypi dockerd[1536]: time="2016-10-10T20:44:33.715921435Z" level=info msg="Docker daemon" commit=23cf638 graphdriver=overlay version=1.12.1
Oct 10 20:44:33 raspberrypi systemd[1]: Started Docker Application Container Engine.
Oct 10 20:44:33 raspberrypi dockerd[1536]: time="2016-10-10T20:44:33.754984356Z" level=info msg="API listen on /var/run/docker.sock"
Oct 10 20:44:34 raspberrypi ntpd[723]: Listen normally on 9 br0 169.254.124.135 UDP 123
Oct 10 20:44:34 raspberrypi ntpd[723]: Deleting interface #8 br0, 192.168.1.19#123, interface stats: received=0, sent=0, dropped=0, active_time=94 secs
Once the Docker bridge network has been created, its configuration is not editable at run time. Try starting the Docker daemon with --bip=CIDR to set your bridge IP manually.
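As a concrete sketch (hedged: the daemon.json path and CIDR are my assumptions, and note that bip configures the default docker0 bridge, not a bridge passed via -b):
$ cat /etc/docker/daemon.json
{
"bip": "192.168.1.10/24"
}
$ sudo systemctl restart docker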
I am using Ubuntu 16.04 with Docker 1.11.2. I have configured systemd to automatically restart the Docker daemon. When I kill the Docker daemon, the daemon restarts, but the container does not, even though its RestartPolicy is set to always. From the logs I can read that it failed to create a directory because it already exists. I personally think it is related to containerd being stopped.
Any help would be appreciated.
Aug 25 19:20:19 api-31 systemd[1]: docker.service: Main process exited, code=killed, status=9/KILL
Aug 25 19:20:19 api-31 docker[17617]: time="2016-08-25T19:20:19Z" level=info msg="stopping containerd after receiving terminated"
Aug 25 19:21:49 api-31 systemd[1]: docker.service: State 'stop-sigterm' timed out. Killing.
Aug 25 19:21:49 api-31 systemd[1]: docker.service: Unit entered failed state.
Aug 25 19:21:49 api-31 systemd[1]: docker.service: Failed with result 'timeout'.
Aug 25 19:21:49 api-31 systemd[1]: docker.service: Service hold-off time over, scheduling restart.
Aug 25 19:21:49 api-31 systemd[1]: Stopped Docker Application Container Engine.
Aug 25 19:21:49 api-31 systemd[1]: Closed Docker Socket for the API.
Aug 25 19:21:49 api-31 systemd[1]: Stopping Docker Socket for the API.
Aug 25 19:21:49 api-31 systemd[1]: Starting Docker Socket for the API.
Aug 25 19:21:49 api-31 systemd[1]: Listening on Docker Socket for the API.
Aug 25 19:21:49 api-31 systemd[1]: Starting Docker Application Container Engine...
Aug 25 19:21:49 api-31 docker[19023]: time="2016-08-25T19:21:49.913162167Z" level=info msg="New containerd process, pid: 19029\n"
Aug 25 19:21:50 api-31 kernel: [87066.742831] audit: type=1400 audit(1472152910.946:23): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="docker-default" pid=19043 comm="apparmor_parser"
Aug 25 19:21:50 api-31 docker[19023]: time="2016-08-25T19:21:50.952073973Z" level=info msg="[graphdriver] using prior storage driver \"overlay\""
Aug 25 19:21:50 api-31 docker[19023]: time="2016-08-25T19:21:50.956693893Z" level=info msg="Graph migration to content-addressability took 0.00 seconds"
Aug 25 19:21:50 api-31 docker[19023]: time="2016-08-25T19:21:50.961641996Z" level=info msg="Firewalld running: false"
Aug 25 19:21:51 api-31 docker[19023]: time="2016-08-25T19:21:51.016582850Z" level=info msg="Removing stale sandbox 66ef9e1af997a1090fac0c89bf96c2631bea32fbe3c238c4349472987957c596 (547bceaad5d121444ddc6effbac3f472d0c232d693d8cc076027e238cf253613)"
Aug 25 19:21:51 api-31 docker[19023]: time="2016-08-25T19:21:51.046227326Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 25 19:21:51 api-31 docker[19023]: time="2016-08-25T19:21:51.081106790Z" level=warning msg="Your kernel does not support swap memory limit."
Aug 25 19:21:51 api-31 docker[19023]: time="2016-08-25T19:21:51.081650610Z" level=info msg="Loading containers: start."
Aug 25 19:22:01 api-31 kernel: [87076.922492] docker0: port 1(vethbbc1192) entered disabled state
Aug 25 19:22:01 api-31 kernel: [87076.927128] device vethbbc1192 left promiscuous mode
Aug 25 19:22:01 api-31 kernel: [87076.927131] docker0: port 1(vethbbc1192) entered disabled state
Aug 25 19:22:03 api-31 docker[19023]: .time="2016-08-25T19:22:03.085800458Z" level=warning msg="error locating sandbox id 66ef9e1af997a1090fac0c89bf96c2631bea32fbe3c238c4349472987957c596: sandbox 66ef9e1af997a1090fac0c89bf96c2631bea32fbe3c238c4349472987957c596 not found"
Aug 25 19:22:03 api-31 docker[19023]: time="2016-08-25T19:22:03.085907328Z" level=warning msg="failed to cleanup ipc mounts:\nfailed to umount /var/lib/docker/containers/547bceaad5d121444ddc6effbac3f472d0c232d693d8cc076027e238cf253613/shm: invalid argument"
Aug 25 19:22:03 api-31 kernel: [87078.882836] device veth5c6999c entered promiscuous mode
Aug 25 19:22:03 api-31 kernel: [87078.882984] IPv6: ADDRCONF(NETDEV_UP): veth5c6999c: link is not ready
Aug 25 19:22:03 api-31 systemd-udevd[19128]: Could not generate persistent MAC address for veth5c6999c: No such file or directory
Aug 25 19:22:03 api-31 systemd-udevd[19127]: Could not generate persistent MAC address for veth39fb4d3: No such file or directory
Aug 25 19:22:03 api-31 kernel: [87078.944218] docker0: port 1(veth5c6999c) entered disabled state
Aug 25 19:22:03 api-31 kernel: [87078.948636] device veth5c6999c left promiscuous mode
Aug 25 19:22:03 api-31 kernel: [87078.948640] docker0: port 1(veth5c6999c) entered disabled state
Aug 25 19:22:03 api-31 docker[19023]: time="2016-08-25T19:22:03.219677059Z" level=error msg="Failed to start container 547bceaad5d121444ddc6effbac3f472d0c232d693d8cc076027e238cf253613: rpc error: code = 6 desc = \"mkdir /run/containerd/547bceaad5d121444ddc6effbac3f472d0c232d693d8cc076027e238cf253613: file exists\""
Aug 25 19:22:03 api-31 docker[19023]: time="2016-08-25T19:22:03.219750430Z" level=info msg="Loading containers: done."
Aug 25 19:22:03 api-31 docker[19023]: time="2016-08-25T19:22:03.219776593Z" level=info msg="Daemon has completed initialization"
Aug 25 19:22:03 api-31 docker[19023]: time="2016-08-25T19:22:03.219847738Z" level=info msg="Docker daemon" commit=b9f10c9 graphdriver=overlay version=1.11.2
Aug 25 19:22:03 api-31 systemd[1]: Started Docker Application Container Engine.
Aug 25 19:22:03 api-31 docker[19023]: time="2016-08-25T19:22:03.226116336Z" level=info msg="API listen on /var/run/docker.sock"
@VonC - Thank you for pointing me in the right direction. I researched the thread, but in my case apparmor is not the issue. There are some other issues mentioned in the thread, so I followed them and found the solution.
SOLUTION:
On Ubuntu 16.04 the problem is that systemd kills the containerd process along with the Docker daemon process. To prevent this, you need to add
KillMode=process
to /lib/systemd/system/docker.service, and that fixes the issue.
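As a sketch, the same setting can also go in a drop-in override instead of the packaged unit file, which survives package upgrades better (the drop-in approach is my suggestion, not part of the original solution):
$ sudo systemctl edit docker
# in the editor that opens, add:
[Service]
KillMode=process
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker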
Here are the sources I used:
https://github.com/docker/docker/issues/25246
https://github.com/docker/docker/blob/master/contrib/init/systemd/docker.service#L25
That seems to be followed by issue 25487 (August 2016), and was reported even before (April 2016) in issue 22195.
Check if you are in the situation mentioned in issue 21702 by Tõnis Tiigi:
This seems to be caused by the apparmor profile for docker daemon we have in docker/contrib/apparmor.
If this profile is applied in v1.11 (at least ubuntu wily) then container starting does not work.
I'm not sure if users have just manually enforced this profile or apparently we also accidentally installed this profile in 1.10.0-rc1 (#19707).
So the workaround, until we figure out how to deal with this, is to unload the profile with something like apparmor_parser -R /etc/apparmor.d/docker-engine, delete it, and restart the daemon.
/etc/apparmor.d/docker is the profile for the containers and does not need to be changed.
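Spelled out as commands, the workaround quoted above would look roughly like this (the profile path comes from that quote; the removal step assumes the file exists on your host):
$ sudo apparmor_parser -R /etc/apparmor.d/docker-engine
$ sudo rm /etc/apparmor.d/docker-engine
$ sudo systemctl restart docker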