I am running Ubuntu 16.04.5 LTS (kernel 4.4.0-108-generic) on a virtual private server.
My aim is to deploy a Meteor application with mup.js,
but the deployment fails because dockerd is not running. The problem is that I cannot get dockerd to launch after a system reboot.
I tried changing the graphdriver as suggested in other threads
(Not able to start docker on Ubuntu 16.04.2 LTS (error initializing graphdriver)), switching to aufs or overlay2, but to no avail. I also updated my kernel, purged the Docker repos, and reinstalled Docker on my machine.
I have close to no experience working with Docker, and the website I'm trying to put back online is part of a show whose last night is tomorrow! I must say I'm getting a bit desperate; any help is welcome.
Thank you!
docker & dockerd are both version 18.06.1-ce, build e68fc7a
$ sudo dockerd
INFO[0000] libcontainerd: new containerd process, pid: 3488
WARN[0000] containerd: low RLIMIT_NOFILE changing to max current=1024 max=1048576
WARN[0000] failed to rename /var/lib/docker/tmp for background deletion: %!s(<nil>). Deleting synchronously
Error starting daemon: error initializing graphdriver: driver not supported
journalctl -xe yields:
Oct 03 01:22:19 vps332343 systemd[1]: Listening on Docker Socket for the API.
-- Subject: Unit docker.socket has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker.socket has finished starting up.
--
-- The start-up result is done.
Oct 03 01:22:19 vps332343 systemd[1]: docker.service: Start request repeated too quickly.
Oct 03 01:22:19 vps332343 systemd[1]: Failed to start Docker Application Container Engine.
-- Subject: Unit docker.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker.service has failed.
--
-- The result is failed.
Oct 03 01:22:19 vps332343 systemd[1]: docker.socket: Unit entered failed state.
Oct 03 01:22:37 vps332343 sudo[3651]: eboutin : TTY=pts/0 ; PWD=/etc/nginx/sites-available ; USER=root ; COMMAND=/bin/journalctl -xe
Oct 03 01:22:37 vps332343 sudo[3651]: pam_unix(sudo:session): session opened for user root by eboutin(uid=0)
df -tH yields:
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 970M 0 970M 0% /dev
tmpfs tmpfs 196M 5.6M 190M 3% /run
/dev/vda1 ext4 9.7G 4.6G 5.1G 48% /
copymods tmpfs 977M 28K 977M 1% /lib/modules
tmpfs tmpfs 977M 68K 977M 1% /dev/shm
tmpfs tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs tmpfs 977M 0 977M 0% /sys/fs/cgroup
tmpfs tmpfs 196M 0 196M 0% /run/user/1002
tmpfs tmpfs 196M 0 196M 0% /run/user/1001
/etc/docker/daemon.json contents:
{"storage-driver":"devicemapper"}
(no other modified config file)
Try configuring devicemapper as the storage driver and clean out the /var/lib/docker/ folder before starting Docker with rm -rf /var/lib/docker/* (this will delete all your previous containers/volumes/...).
Once Docker is running, check docker info for any warnings - they may help you with additional configuration.
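A minimal sketch of that sequence, assuming devicemapper is already set in /etc/docker/daemon.json as shown in the question:
$ sudo systemctl stop docker                    # make sure the daemon is not running
$ sudo rm -rf /var/lib/docker/*                 # wipes all existing images, containers and volumes
$ sudo systemctl start docker                   # start the daemon with the devicemapper driver
$ docker info | grep -i -A 3 'storage driver'   # confirm the driver and look for warnings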
This can also be due to a recent kernel update that broke the devicemapper graphdriver.
So if rm -rf /var/lib/docker/* and reinstalling Docker do not work, try reinstalling the kernel image and rebooting:
$ sudo apt-get install --reinstall linux-image-`uname -r`
$ sudo reboot
Related
Current Setup:
Machine OS: Windows 7
VMware: VMware Workstation 8.0.2-591240
VM: Ubuntu 16.04 LTS
Docker on Ubuntu: Docker Engine Community version 19.03.5
I recently set up Docker containers to run Bamboo agents. The disk keeps running out of space. Can anyone suggest mounting options or any other tips to keep the volume usage down?
P.S. I had a similar setup before and it was all good until the VM got corrupted and I needed to set up a new VM.
root@ubuntu:/# df -h
Filesystem Size Used Avail Use% Mounted on
udev 5.8G 0 5.8G 0% /dev
tmpfs 1.2G 113M 1.1G 10% /run
/dev/sda1 12G 12G 0 100% /
tmpfs 5.8G 0 5.8G 0% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 5.8G 0 5.8G 0% /sys/fs/cgroup
tmpfs 1.2G 0 1.2G 0% /run/user/1000
overlay 12G 12G 0 100% /var/lib/docker/overlay2/e0e78a7d84da9c2a1e1c9f91ee16bc6515d8660e1a2db5e207504469f9e496ae/merged
overlay 12G 12G 0 100% /var/lib/docker/overlay2/8f3a73cd0b201f4a8a92ded0cfab869441edfbc2199574c225adbf78a2393129/merged
overlay 12G 12G 0 100% /var/lib/docker/overlay2/3d947960c28e834aa422b5ea16c261739d06bf22fe0f33f9e0248d233f2a84d1/merged
12G is quite little space if you want to leverage cached images to speed up the build process. So, assuming you don't want to expand the root partition of that VM, what you can do is clean up images after every build, or every X builds.
For example, I follow the second approach: I run a cleaner job every night on my Jenkins agents to prevent the disk from running out of space.
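A minimal sketch of such a nightly cleanup job (the schedule and prune flags are only an example; -a also removes images no container references, --volumes removes unused volumes):
# /etc/cron.d/docker-cleanup (hypothetical path) - run every night at 03:00
0 3 * * * root docker system prune -af --volumes >> /var/log/docker-cleanup.log 2>&1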
A Docker installation takes your /var space by default. Cleaning up unused containers will work for some time, but stops helping once you really can't delete any more. The lasting fix is to map the daemon's data-root to a disk with more available space. You can do that by configuring the data-root parameter in your daemon.json file:
{
  "data-root": "/new/path/to/docker-data"
}
Once you have done that, run systemctl daemon-reload and restart the Docker service so the configuration change takes effect; note that existing data under /var/lib/docker is not moved automatically, so copy it to the new path first if you still need it. This will resolve your space issue permanently. If you don't want your running containers to be killed during the restart, you must have the live-restore property configured in your daemon.json file. Hope this helps.
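A minimal sketch of applying that change, assuming /new/path/to/docker-data already exists on the larger disk:
$ sudo systemctl stop docker
$ sudo rsync -aP /var/lib/docker/ /new/path/to/docker-data/   # optional: carry over existing images/volumes
$ sudo nano /etc/docker/daemon.json                           # add the "data-root" entry shown above
$ sudo systemctl daemon-reload
$ sudo systemctl start docker
$ docker info | grep 'Docker Root Dir'                        # should now print the new path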
I have purchased a volume for my droplet on DigitalOcean, and when I run docker compose build it takes up space on my current setup and I am not able to build my images.
My current setup is on
`/dev/vda1 25227048 25191932 18732 100% /`
The full df output on Ubuntu is:
udev 2013884 0 2013884 0% /dev
tmpfs 404632 5672 398960 2% /run
/dev/vda1 25227048 25191932 18732 100% /
tmpfs 2023160 0 2023160 0% /dev/shm
tmpfs 5120 0 5120 0% /run/lock
tmpfs 2023160 0 2023160 0% /sys/fs/cgroup
/dev/vda15 106858 3437 103421 4% /boot/efi
tmpfs 404632 0 404632 0% /run/user/0
/dev/sda 103081248 93980 97728004 1% /mnt/volume_lon1_01
How do I build so that it builds on my new volume?
`/dev/sda 103081248 93980 97728004 1% /mnt/volume_lon1_01`
It now fails with this error:
Version: 18.03.0-ce
API version: 1.37
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:10:01 2018
OS/Arch: linux/amd64
Experimental: false
Orchestrator: swarm
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
If you want to use your new disk only for Docker, you need to mount it at the Docker base directory: /var/lib/docker.
But before doing that, you need to:
Stop the Docker daemon completely: sudo systemctl stop docker
Sync everything in the current directory to the new disk: sudo rsync -aqxP /var/lib/docker/ /mnt/volume_lon1_01
Delete the old content: sudo rm -rf /var/lib/docker/*
Mount the new volume at the right place: sudo mount /dev/sda /var/lib/docker
Start the Docker daemon: sudo systemctl start docker
Check that everything works properly - you can check that your volumes are still listed (docker volume ls), that your local images are there (docker image ls), and that you can start a new container (docker run -ti alpine)
Add the new mount definition into /etc/fstab* (see the consolidated sketch below)
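A consolidated sketch of those steps, assuming the new volume is /dev/sda mounted at /mnt/volume_lon1_01 as in the question:
$ sudo systemctl stop docker                              # 1. stop the daemon
$ sudo rsync -aqxP /var/lib/docker/ /mnt/volume_lon1_01   # 2. copy existing data onto the new disk
$ sudo rm -rf /var/lib/docker/*                           # 3. clear the old directory
$ sudo mount /dev/sda /var/lib/docker                     # 4. mount the volume over it
$ sudo systemctl start docker                             # 5. restart the daemon
$ docker volume ls && docker image ls                     # 6. sanity checks
$ docker run -ti alpine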
You could also change the default directory of docker to use /mnt/volume_lon1_01.
If you want the second option, I recommend you to read https://linuxconfig.org/how-to-move-docker-s-default-var-lib-docker-to-another-directory-on-ubuntu-debian-linux
*To modify fstab, if you are not familiar with it, you need a few pieces of information: the filesystem used by the partition, its device path, and where you want to mount it.
After that, edit the file /etc/fstab and check whether a line already exists with the partition path (/dev/sda for you). If not, add a new line; if yes, just edit it to change the mount path to the new one.
To find the filesystem of a partition that is already mounted, run: mount
This returns one line per partition, and you need to check the type of the partition.
Example: rootfs on / type lxfs (rw,noatime), the partition type is lxfs
If you need to add a new line, it will be something like that:
/dev/sda /var/lib/docker <fs type> defaults 0 0
I was using Docker on my CentOS machine for a while and had a lot of images and containers (around 4 GB). My machine has 8 GB of storage, and I kept getting an error from devicemapper whenever I tried to remove a Docker container or image with docker rm or docker rmi. The error was: Error response from daemon: Driver devicemapper failed to remove root filesystem. So I stopped the Docker service and tried restarting it, but that failed due to devicemapper. After that I uninstalled Docker and removed all images, containers, and volumes by running the following command: rm -rf /var/lib/docker. However, after running that it does not seem like any space was freed up:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 8.0G 7.7G 346M 96% /
devtmpfs 1.8G 0 1.8G 0% /dev
tmpfs 1.8G 0 1.8G 0% /dev/shm
tmpfs 1.8G 193M 1.6G 11% /run
tmpfs 1.8G 0 1.8G 0% /sys/fs/cgroup
tmpfs 361M 0 361M 0% /run/user/1000
$ du -ch -d 1 | sort -hr
3.6G total
3.6G .
1.7G ./usr
903M ./var
433M ./home
228M ./opt
193M ./run
118M ./boot
17M ./etc
6.4M ./tmp
4.0K ./root
0 ./sys
0 ./srv
0 ./proc
0 ./mnt
0 ./media
0 ./dev
Why does df tell me I am using 7.7G whereas du tells me I am using 3.6G? The figure that du gives (3.6G) should be the correct one since I deleted everything in /var/lib/docker.
I had a similar issue. This ticket was helpful.
Depending on the filesystem you are using, you will want to either run fstrim or zerofree, or attach the drive to another machine and use xfs_repair.
If your filesystem is XFS and you used xfs_repair, then after running that command there should be a lost+found directory at the root of the drive containing all the data that was taking up space but was unreachable.
You can then delete that directory, and the change will actually be reflected in du.
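A minimal sketch of the fstrim route, assuming the root filesystem and the underlying device support discard:
$ sudo fstrim -v /    # prints how many bytes were trimmed from the root filesystem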
I have been trying to get Docker-in-Docker working for a CentOS 7 image, with Ubuntu as the host.
As of now I have not started building this as a Docker image, and am currently experimenting in bash on how to get Docker-in-Docker to work.
Currently, systemctl start docker run inside the inner CentOS Docker image gives the following error:
Error: No space left on device
Job for docker.service failed. See 'systemctl status docker.service' and 'journalctl -xn' for details.
Further investigation with systemctl status docker gives the following:
Oct 13 04:32:08 codenvy docker[6520]: time="2015-10-13T04:32:08Z" level=info msg="Listening for HTTP on unix (/var/run/docker.sock)"
Oct 13 04:32:08 codenvy docker[6520]: time="2015-10-13T04:32:08Z" level=warning msg="Running modprobe bridge nf_nat failed with message: , error: exit status 1"
Oct 13 04:32:08 codenvy docker[6520]: time="2015-10-13T04:32:08Z" level=info msg="-job init_networkdriver() = OK (0)"
Oct 13 04:32:09 codenvy docker[6520]: time="2015-10-13T04:32:09Z" level=warning msg="Your kernel does not support cgroup swap limit."
Oct 13 04:32:09 codenvy docker[6520]: time="2015-10-13T04:32:09Z" level=info msg="Loading containers: start."
Oct 13 04:32:09 codenvy docker[6520]: time="2015-10-13T04:32:09Z" level=info msg="Loading containers: done."
Oct 13 04:32:09 codenvy docker[6520]: time="2015-10-13T04:32:09Z" level=fatal msg="Shutting down daemon due to errors: inotify_add_watch: no space left on device"
Oct 13 04:32:09 codenvy systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Oct 13 04:32:09 codenvy systemd[1]: Failed to start Docker Application Container Engine.
Oct 13 04:32:09 codenvy systemd[1]: Unit docker.service entered failed state.
Additional Information
Host OS: Ubuntu 14.04.2 LTS
Docker Image: codenvy/onprem-multi (which is based on centos:centos7)
Mounted Volumes
/sys/fs/cgroup
/sys/fs/cgroup:/sys/fs/cgroup:ro
/mnt/docker-files-lvm/docker/codenvy/docker:/var/lib/docker
/mnt/docker-files-lvm/docker/codenvy/ldap:/var/lib/ldap
/mnt/docker-files-lvm/docker/codenvy/mongo:/var/lib/mongo
/mnt/docker-files-lvm/docker/codenvy/home:/home
Privileged mode
Note
This is not about how to install Codenvy; it's about getting Docker itself installed and working before installing Codenvy.
Added: df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/hc--dawn--vg-root 27G 3.5G 23G 14% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 16G 12K 16G 1% /dev
tmpfs 3.2G 1.1M 3.2G 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 16G 37M 16G 1% /run/shm
none 100M 0 100M 0% /run/user
/dev/sda1 236M 95M 129M 43% /boot
/dev/mapper/base--storage-docker--files 886G 52G 790G 7% /mnt/docker-files-lvm
Note: /mnt/docker-files-lvm/docker lives on the /mnt/docker-files-lvm mount (which has 790 GB available)
If it is a no-space-left problem, you can configure Docker to store its images and containers elsewhere.
Since you are using systemctl the config file is located here:
/lib/systemd/system/docker.service
You can add a -g option to change where Docker stores things.
For example:
ExecStart=/usr/bin/docker daemon -g /there_is_space_here -H fd://
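After editing the unit file, a minimal sketch of reloading it (the target directory is the answer's placeholder and must already exist):
$ sudo mkdir -p /there_is_space_here
$ sudo systemctl daemon-reload     # pick up the edited unit file
$ sudo systemctl restart docker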
I switched the storage driver to devicemapper and tried to grow the rootfs for containers to 40G.
I added the following config to /var/lib/boot2docker/profile and rebooted the boot2docker VM:
[/var/lib/boot2docker/profile start]
#!/bin/sh
EXTRA_ARGS="--storage-opt dm.basesize=40G --storage-driver=devicemapper"
[/var/lib/boot2docker/profile end]
docker.log shows the config taking effect:
[/var/lib/boot2docker/docker.log snippet start]
/usr/local/bin/docker -d -D -g "/var/lib/docker" -H unix:// -H tcp://0.0.0.0:2376 --storage-opt dm.basesize=40G --storage-driver=devicemapper --tlsverify --tlscacert=/var/lib/boot2docker/tls/ca.pem --tlscert=/var/lib/boot2docker/tls/server.pem --tlskey=/var/lib/boot2docker/tls/serverkey.pem >> "/var/lib/boot2docker/docker.log"
2014/12/22 03:33:36 docker daemon: 1.3.2 39fa2fa; execdriver: native; graphdriver: devicemapper
[74c56fa4] +job serveapi(unix:///var/run/docker.sock, tcp://0.0.0.0:2376)
[debug] deviceset.go:565 Generated prefix: docker-8:1-784941
[debug] deviceset.go:568 Checking for existence of the pool 'docker-8:1-784941-pool'
[debug] deviceset.go:587 Pool doesn't exist. Creating it.
[/var/lib/boot2docker/docker.log snippet end]
However, the container's rootfs is still bound to 20G instead of the 40G set in the configuration:
[df -h in container start]
[root@sshd ~]# df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 20G 401M 19G 3% /
/dev/mapper/docker-8:1-784941-8184b64c9275276c9420f5decd0b1d31dc8bce725ecbd93a918407363b45b2d3
20G 401M 19G 3% /
tmpfs 2.0G 0 2.0G 0% /dev
shm 64M 0 64M 0% /dev/shm
/dev/sda1 192G 6.9G 175G 4% /etc/resolv.conf
/dev/sda1 192G 6.9G 175G 4% /etc/hostname
/dev/sda1 192G 6.9G 175G 4% /etc/hosts
tmpfs 2.0G 0 2.0G 0% /proc/kcore
[root@sshd ~]#
[df -h in container end]
The --storage-opt dm.basesize option does not seem to work at all. How do I fix this?
It turns out that the new dm.basesize only takes effect for newly pulled images.
It didn't work for me because I used "docker load < /xxx" to load a local backup tar image.
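A minimal sketch of getting an image re-created at the new base size, assuming the image is also available from a registry (the image name here is only a placeholder):
$ docker rmi myimage              # remove the copy that was loaded under the old 20G base size
$ docker pull myimage             # pull it fresh so its root filesystem is provisioned at 40G
$ docker run -ti myimage df -h /  # the container rootfs should now show about 40G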