Working in a Jenkins X build container ...
I'm trying to mount a volume while in a docker container. The directory gets mounted; however, the files that exist in the source (host) directory are not present in the container.
In this case, the host is a docker container as well, so basically I'm running docker-compose from within a docker container.
Has anyone experienced this issue and found a solution?
Here are the results:
bash-4.2# pwd
/home/jenkins
bash-4.2# ls -l datadir/
total 4
-rw-r--r-- 1 root root 4 May 15 20:06 foo.txt
bash-4.2# cat docker-compose.yml
version: '2.3'
services:
  testing-wiremock:
    image: rodolpheche/wiremock
    volumes:
      - ./datadir:/home/wiremock
bash-4.2# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 95G 24G 71G 25% /
tmpfs 7.4G 0 7.4G 0% /dev
tmpfs 7.4G 0 7.4G 0% /sys/fs/cgroup
/dev/sda1 95G 24G 71G 25% /etc/hosts
tmpfs 7.4G 4.0K 7.4G 1% /root/.m2
shm 64M 0 64M 0% /dev/shm
tmpfs 7.4G 4.0K 7.4G 1% /home/jenkins/.docker
tmpfs 7.4G 1.9M 7.4G 1% /run/docker.sock
tmpfs 7.4G 0 7.4G 0% /home/jenkins/.gnupg
tmpfs 7.4G 12K 7.4G 1% /run/secrets/kubernetes.io/serviceaccount
bash-4.2# docker-compose up -d
Creating network "jenkins_default" with the default driver
Creating jenkins_testing-wiremock_1 ... done
bash-4.2# docker ps |grep wiremock
6293dee408aa rodolpheche/wiremock "/docker-entrypoint.…" 26 seconds ago Up 25 seconds 8080/tcp, 8443/tcp jenkins_testing-wiremock_1
8db3b729c5d2 rodolpheche/wiremock "/docker-entrypoint.…" 21 minutes ago Up 21 minutes (unhealthy) 8080/tcp, 8443/tcp zendeskintegration_rest_1
bd52fb96036d rodolpheche/wiremock "/docker-entrypoint.…" 21 minutes ago Up 21 minutes (unhealthy) 8080/tcp, 8443/tcp zendeskintegration_zendesk_1
bash-4.2# docker exec -it 6293dee408aa bash
root@6293dee408aa:/home/wiremock# ls -ltr
total 8
drwxr-xr-x 2 root root 4096 May 15 20:06 mappings
drwxr-xr-x 2 root root 4096 May 15 20:06 __files
I could reproduce the issue by running this on a macOS system:
First open a shell in a container that already has docker-compose installed:
docker run --rm -v $(pwd):/work -v /var/run/docker.sock:/var/run/docker.sock --workdir /work -ti tmaier/docker-compose sh
I map the current folder so that I can work with my current project as if it were on my host.
And then inside the container:
docker-compose run testing-wiremock ls -lart
Now change the docker-compose.yml to the following:
version: '2.3'
services:
  testing-wiremock:
    image: rodolpheche/wiremock
    volumes:
      - /tmp:/home/wiremock/
and run again:
docker-compose run testing-wiremock ls -lart
This will show you the contents of the /tmp directory on the host where the docker daemon actually runs (the one behind the mounted docker socket). To test, you can even create a folder and a file in that /tmp and run the "docker-compose run" again. You will see the new files.
Moral of the story:
If the mounted folder corresponds to an existing folder on the host where the docker daemon is actually running, then the mapping will actually work.
host -> container -> container (mounts here refer to paths on the host)
In your specific case the folder is mounted empty because the mounted path (check it by running docker-compose config) is not present on the host (host = the host running your Jenkins container, not the Jenkins container itself).
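You can double-check what path the bind mount resolves to, and, as one possible workaround when the daemon runs on a different host, copy the files into the running container instead of relying on the bind mount. A minimal sketch, assuming the container name shown in the docker ps output above:
# show the fully resolved compose file, including the absolute host path of each bind mount
docker-compose config
# bring the service up, then copy the host-side files into the container by hand
docker-compose up -d testing-wiremock
docker cp datadir/. jenkins_testing-wiremock_1:/home/wiremock/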
Related
I have purchased a volume for my droplet in DigitalOcean, and when I run docker-compose build it takes up space on my current setup, so I am not able to build my images.
My current setup is on
`/dev/vda1 25227048 25191932 18732 100% /`
The full Ubuntu df output is:
udev 2013884 0 2013884 0% /dev
tmpfs 404632 5672 398960 2% /run
/dev/vda1 25227048 25191932 18732 100% /
tmpfs 2023160 0 2023160 0% /dev/shm
tmpfs 5120 0 5120 0% /run/lock
tmpfs 2023160 0 2023160 0% /sys/fs/cgroup
/dev/vda15 106858 3437 103421 4% /boot/efi
tmpfs 404632 0 404632 0% /run/user/0
/dev/sda 103081248 93980 97728004 1% /mnt/volume_lon1_01
How do I build so that it builds on my new volume?
`/dev/sda 103081248 93980 97728004 1% /mnt/volume_lon1_01`
It now fails with this error:
Version: 18.03.0-ce
API version: 1.37
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:10:01 2018
OS/Arch: linux/amd64
Experimental: false
Orchestrator: swarm
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
If you want to use your new disk only for docker, you need to mount it in the docker base directory: /var/lib/docker.
But before doing that, you need to follow these steps (a consolidated command sketch follows the list):
Stop the docker daemon completely: sudo systemctl stop docker
Sync everything in the current docker directory to the new disk: sudo rsync -aqxP /var/lib/docker/ /mnt/volume_lon1_01
Delete the old content: sudo rm -rf /var/lib/docker/*
Mount the new volume to the right place: sudo mount /dev/sda /var/lib/docker
Start the docker daemon: sudo systemctl start docker
Check that everything works properly: verify that your volumes are still listed with docker volume ls, that your local images are there with docker image ls, and that you can start a new container with docker run -ti alpine
Add the new mount definition into /etc/fstab*
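Put together, a minimal sketch of the whole sequence, assuming the same device and mount point as above and that the new disk already has a filesystem:
# stop the daemon so nothing writes to /var/lib/docker while it is moved
sudo systemctl stop docker
# copy the existing data onto the new disk
sudo rsync -aqxP /var/lib/docker/ /mnt/volume_lon1_01
# clear the old location and mount the new disk over it
sudo rm -rf /var/lib/docker/*
sudo mount /dev/sda /var/lib/docker
# bring docker back and verify volumes, images and container startup
sudo systemctl start docker
docker volume ls
docker image ls
docker run -ti alpine true
# then persist the mount in /etc/fstab (see the note below for the filesystem type)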
You could also change the default directory of docker to use /mnt/volume_lon1_01.
If you want the second option, I recommend reading https://linuxconfig.org/how-to-move-docker-s-default-var-lib-docker-to-another-directory-on-ubuntu-debian-linux (a minimal daemon.json sketch follows the fstab note below).
*To modify the fstab, if you are not familiar with it, you need a few pieces of information: the filesystem used by the partition, its path, and where you want to mount it.
After that, edit the file /etc/fstab and check whether a line already exists with the partition path (/dev/sda for you). If not, add a new line; if yes, just edit it to change the mount path to the new one.
To find the filesystem type of a partition that is already mounted, run: mount
This will return one line per partition, and you need to check the type of the partition.
Example: rootfs on / type lxfs (rw,noatime); here the partition type is lxfs
If you need to add a new line, it will be something like this:
/dev/sda /var/lib/docker <fs type> defaults 0 0
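If you go with the second option instead (pointing Docker's data directory at the new volume rather than mounting over /var/lib/docker), a minimal /etc/docker/daemon.json sketch would look like this; data-root is the option name on Docker 17.05 and later (older releases call it graph), and the docker subdirectory is just an example path on your volume:
{
  "data-root": "/mnt/volume_lon1_01/docker"
}
Then restart the daemon with sudo systemctl restart docker.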
I was using df -hl to check the status of my VPS, and it seems like the storage is being taken up by docker several times over (I only have one WordPress site on this VPS; there are no other projects).
Today I received an email from Linode telling me my storage is full:
Total: 25600 MB
Used: 25600 MB
I have a WordPress site on this VPS, built with docker and WordPress.
Here is the output from my VPS:
root@localhost:~# df -hl
Filesystem Size Used Avail Use% Mounted on
udev 463M 0 463M 0% /dev
tmpfs 99M 5.9M 93M 6% /run
/dev/sda 25G 5.0G 19G 22% /
tmpfs 493M 0 493M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 493M 0 493M 0% /sys/fs/cgroup
overlay 25G 5.0G 19G 22% /var/lib/docker/overlay2/2ebf8af06fccd1e3a455746e257c990e6d85f848832eaadd636f48d56e6fbefb/merged
overlay 25G 5.0G 19G 22% /var/lib/docker/overlay2/28044ad06cc4b50d58a331cd644a254c7c90480ad04c1686f2974503da1c98de/merged
shm 64M 0 64M 0% /var/lib/docker/containers/932928ba7b7ccbbb4dd9f05263fadda8c6764ec7185deefc37c0fc555a2c32d5/mounts/shm
shm 64M 0 64M 0% /var/lib/docker/containers/67d10956ef387af8327570b7013cc113114a48ccf3654f9ee01041e88e740192/mounts/shm
overlay 25G 5.0G 19G 22% /var/lib/docker/overlay2/b81fd707a47702b060b462fbb1424bf024c4e593071b0782f4c817ca46a188e2/merged
shm 64M 0 64M 0% /var/lib/docker/containers/ce2422fff8741ede110a730d1283e0f43792de05a14b2ae9bdb59874fefa5fc2/mounts/shm
tmpfs 99M 0 99M 0% /run/user/0
root@localhost:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
932928ba7b7c wordpress:latest "docker-entrypoint.s…" 7 weeks ago Up 2 weeks 0.0.0.0:1994->80/tcp jujuzone_site
67d10956ef38 phpmyadmin/phpmyadmin "/run.sh supervisord…" 7 weeks ago Up 2 weeks 9000/tcp, 0.0.0.0:8081->80/tcp phpmyadmin
ce2422fff874 mysql:5.7 "docker-entrypoint.s…" 7 weeks ago Up 4 hours 3306/tcp, 33060/tcp db_jujuzone
root@localhost:~# docker system prune
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all dangling images
- all dangling build cache
Are you sure you want to continue? [y/N] y
Total reclaimed space: 0B
It seems you've deleted some files that might still be in use by the system.
Keep in mind that in this case, the df command can show a different size from the du command.
You can check that with more precision by running du -hc on the same directories and seeing whether its total differs from what df reports.
You can also run lsof | grep '(deleted)' to see which deleted files still have open file descriptors.
In that case, you can kill the offending process or restart the responsible daemon.
Finally, consider running docker system prune with the -a flag to also clear unused images and perhaps reclaim a bit more space.
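A short sketch of that check-and-reclaim sequence; the container name is just the database container from your docker ps output, so adjust it to whatever process lsof actually points at:
# compare what the filesystem reports with what the files actually occupy
df -h /var/lib/docker
sudo du -shc /var/lib/docker
# list files that were deleted but are still held open by a process
sudo lsof | grep '(deleted)'
# restart whichever container or daemon still holds them, for example:
docker restart db_jujuzone
# then reclaim unused images as well
docker system prune -a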
I was experiencing this same issue recently on Ubuntu 20.04 with Docker v19.03.13. I searched through the Docker docs and found that this may be because of a new type of filesystem they introduced. You can read more about it from here. The way I fixed it was by editing (or creating, if not already present) the /etc/docker/daemon.json file and adding the following lines:
{
  "storage-driver": "overlay"
}
Then restarting docker using the following command:
sudo systemctl restart docker
I have answered a similar question here.
I'm trying to use COS to run some services on GCP.
One of the issues I'm currently seeing is that the VMs I've started very quickly seem to run out of inodes for the /var/lib/docker filesystem. I'd have expected this to be one of the things tuned in a container-optimized OS?
wouter@nbwm-cron ~ $ df -hi
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/root 78K 13K 65K 17% /
devtmpfs 463K 204 463K 1% /dev
tmpfs 464K 1 464K 1% /dev/shm
tmpfs 464K 500 463K 1% /run
tmpfs 464K 13 464K 1% /sys/fs/cgroup
tmpfs 464K 9 464K 1% /mnt/disks
tmpfs 464K 16K 448K 4% /tmp
/dev/sda8 4.0K 11 4.0K 1% /usr/share/oem
/dev/sda1 1013K 998K 15K 99% /var
tmpfs 464K 45 464K 1% /var/lib/cloud
overlayfs 464K 39 464K 1% /etc
wouter@nbwm-cron ~ $ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<name>/stackdriver-agent latest 0c4b075e7550 3 days ago 1.423 GB
<none> <none> 96d027d3feea 4 days ago 905.2 MB
gcr.io/<project>/nbwm-ops/docker-php5 latest 5d2c59c7dd7a 2 weeks ago 1.788 GB
nbwm-cron wouter # tune2fs -l /dev/sda1
tune2fs 1.43.3 (04-Sep-2016)
Filesystem volume name: STATE
Last mounted on: /var
Filesystem UUID: ca44779b-ffd5-405a-bd3e-528071b45f73
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Remount read-only
Filesystem OS type: Linux
Inode count: 1036320
Block count: 4158971
Reserved block count: 0
Free blocks: 4062454
Free inodes: 1030756
First block: 0
Block size: 4096
Fragment size: 4096
Group descriptor size: 64
Reserved GDT blocks: 747
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8160
Inode blocks per group: 510
Flex block group size: 16
Filesystem created: Thu Jun 15 22:39:33 2017
Last mount time: Wed Jun 28 13:51:31 2017
Last write time: Wed Jun 28 13:51:31 2017
Mount count: 5
Maximum mount count: -1
Last checked: Thu Nov 19 19:00:00 2009
Check interval: 0 (<none>)
Lifetime writes: 67 MB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 32
Desired extra isize: 32
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: 66aa0e7f-57da-41d0-86f7-d93270e53030
Journal backup: inode blocks
How do I tune the filesystem to have more inodes available?
This is a known issue with the overlay storage driver in docker and is addressed by the overlay2 driver.
The new cos-61 releases use docker 17.03 with overlay2 storage driver. Could you please give it a try and see if the issue happens again?
Thanks!
I have witnessed the same issue with all COS versions from 57.9202.64.0 (docker 1.11.2) on GKE 1.5 to 65.10323.85.0 (docker 17.03.2) on GKE 1.8.12-gke.1. Older versions were certainly affected too.
Those all use the overlay driver:
pdecat@gke-cluster-test-pdecat-default-pool-e8945081-xhj6 ~ $ docker info 2>&1 | grep "Storage Driver"
Storage Driver: overlay
pdecat@gke-cluster-test-pdecat-default-pool-e8945081-xhj6 ~ $ grep "\(CHROMEOS_RELEASE_VERSION\|CHROMEOS_RELEASE_CHROME_MILESTONE\)" /etc/lsb-release
CHROMEOS_RELEASE_CHROME_MILESTONE=65
CHROMEOS_RELEASE_VERSION=10323.85.0
The overlay2 driver is only used for GKE 1.9+ clusters (fresh or upgraded) with the same COS version:
pdecat@gke-cluster-test-pdecat-default-pool-e8945081-xhj6 ~ $ docker info 2>&1 | grep "Storage Driver"
Storage Driver: overlay2
pdecat@gke-cluster-test-pdecat-default-pool-e8945081-xhj6 ~ $ grep "\(CHROMEOS_RELEASE_VERSION\|CHROMEOS_RELEASE_CHROME_MILESTONE\)" /etc/lsb-release
CHROMEOS_RELEASE_CHROME_MILESTONE=65
CHROMEOS_RELEASE_VERSION=10323.85.0
When the free space/inodes issue occurs with the overlay driver, I resolve it using Spotify's docker-gc:
# docker run --rm --userns host -v /var/run/docker.sock:/var/run/docker.sock -v /etc:/etc spotify/docker-gc
Before:
# df -hi /var/lib/docker/
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 6.0M 5.0M 1.1M 83% /var
# df -h /var/lib/docker/
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 95G 84G 11G 89% /var
# du --inodes -s /var/lib/docker/*
180 /var/lib/docker/containers
4093 /var/lib/docker/image
4 /var/lib/docker/network
4906733 /var/lib/docker/overlay
1 /var/lib/docker/tmp
1 /var/lib/docker/trust
25 /var/lib/docker/volumes
After:
# df -hi /var/lib/docker/
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 6.0M 327K 5.7M 6% /var/lib/docker
# df -h /var/lib/docker/
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 95G 6.6G 88G 7% /var/lib/docker
# du --inodes -s /var/lib/docker/*
218 /var/lib/docker/containers
1792 /var/lib/docker/image
4 /var/lib/docker/network
279002 /var/lib/docker/overlay
1 /var/lib/docker/tmp
1 /var/lib/docker/trust
25 /var/lib/docker/volumes
Note: using the usual docker rmi $(docker images --filter "dangling=true" -q --no-trunc) and docker rm $(docker ps -qa --no-trunc --filter "status=exited") did not help to recover resources in /var/lib/docker/overlay.
I was using Docker on my CentOS machine for a while and had accumulated a lot of images and containers (around 4 GB). My machine has 8 GB of storage, and I kept getting an error from devicemapper whenever I tried to remove a Docker container or image with docker rm or docker rmi. The error was: Error response from daemon: Driver devicemapper failed to remove root filesystem. So I stopped the Docker service and tried restarting it, but that failed due to devicemapper. After that I uninstalled Docker and removed all images, containers, and volumes by running the following command: rm -rf /var/lib/docker. However, after running that it does not seem like any space was freed up:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 8.0G 7.7G 346M 96% /
devtmpfs 1.8G 0 1.8G 0% /dev
tmpfs 1.8G 0 1.8G 0% /dev/shm
tmpfs 1.8G 193M 1.6G 11% /run
tmpfs 1.8G 0 1.8G 0% /sys/fs/cgroup
tmpfs 361M 0 361M 0% /run/user/1000
$ du -ch -d 1 | sort -hr
3.6G total
3.6G .
1.7G ./usr
903M ./var
433M ./home
228M ./opt
193M ./run
118M ./boot
17M ./etc
6.4M ./tmp
4.0K ./root
0 ./sys
0 ./srv
0 ./proc
0 ./mnt
0 ./media
0 ./dev
Why does df tell me I am using 7.7G whereas du tells me I am using 3.6G? The figure that du gives (3.6G) should be the correct one since I deleted everything in /var/lib/docker.
I had a similar issue. This ticket was helpful.
Depending on the file system you are using, you will want to either use fstrim or zerofree, or add the drive to another machine and use xfs_repair.
If your file system is xfs and you used xfs_repair, then after running that command there should be a lost+found directory at the root of the drive containing all the data that was taking up space but was unreachable.
You can then delete that and it will actually be reflected in du.
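For reference, a rough sketch of those options, using the device name from your df output; zerofree and xfs_repair need the filesystem unmounted (or mounted read-only), so treat this as a guide rather than a ready-to-run script:
# ext4 on a discard-capable (thin-provisioned or SSD-backed) disk: release freed blocks
sudo fstrim -v /
# ext2/3/4 on an unmounted or read-only device: zero out free blocks
sudo zerofree /dev/xvda1
# xfs on an unmounted device (e.g. attached to another machine): repair, then inspect lost+found
sudo xfs_repair /dev/xvda1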
I'm playing with volume containers on boot2docker to run Docker on Mac OS X.
boot2docker version
Client version: v1.2.0
Git commit: a551732
I'm trying to perform the backup/restore process which is mentioned in Docker's documentation.
I'm trying to backup a MySQL database which is over 2 GB. When I run the backup command:
docker run --volumes-from data_volume -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /var/lib/mysql
...it fails with this error:
tar: /backup/backup.tar: Wrote only 4096 of 10240 bytes
tar: Error is not recoverable: exiting now
It seems tar ran out of disk space. So I got into my container and looked at the host bind mount; its size is only 1.8 GB.
docker run -t -i -v $HOME:/demo ubuntu /bin/bash
root@bb3921a48ba4:/# df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 19G 8.3G 9.1G 48% /
none 19G 8.3G 9.1G 48% /
tmpfs 1005M 0 1005M 0% /dev
shm 64M 0 64M 0% /dev/shm
/dev/sda1 19G 8.3G 9.1G 48% /etc/hosts
tmpfs 1.8G 1.8G 0 100% /demo
tmpfs 1005M 0 1005M 0% /proc/kcore
You can see that /demo is only 1.8G...
I don't know how to extend this size so I would be able to make large backups...
Any idea? Thanks!
I have this sneaking feeling that you're running out of memory - 2 GB is the default amount of RAM we allocate.
Rather than writing to a file mapped to a virtual filesystem attached to your OS X box's FS, I'd suggest having tar write to STDOUT and then piping that to a file on your local box.
i.e.
docker run --rm ubuntu tar cf - /etc > test.tar
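Adapted to the backup from your question, that would look something like this (a sketch assuming the same data_volume container and MySQL data path; the archive is written straight to a file on your Mac via the shell redirect instead of through the size-limited bind mount):
docker run --rm --volumes-from data_volume ubuntu tar cf - /var/lib/mysql > backup.tar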