Can I take incremental LVM snapshots in Linux? - lvm

I have just made an LVM snapshot of the /opt partition and mounted it at /data. Is there any way to take incremental LVM snapshots?

Yes, it is possible.
Try this to create a new LV snapshot:
lvcreate --size 100M --snapshot --name snap /dev/vg00/lvol1
Read more at the Red Hat Portal.
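For reference, a minimal sketch of creating and mounting such a snapshot; the volume group vg00, origin volume lvol1 and mount point /data are placeholders matching the example above:
# create a 100MB copy-on-write snapshot of the origin volume
lvcreate --size 100M --snapshot --name snap /dev/vg00/lvol1
# mount the snapshot (read-only here) to browse the point-in-time contents
mkdir -p /data
mount -o ro /dev/vg00/snap /data
# when done, unmount and drop the snapshot to free its copy-on-write space
umount /data
lvremove /dev/vg00/snap
Note that a classic LVM snapshot is copy-on-write against its origin, so it only consumes space for blocks that change after it is taken; it is not a chained increment of a previous snapshot.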

Colima: increase docker image size limit

I'm running Docker through Colima and my total image size has hit ~10GB. I need to increase this limit in order to continue.
Is there a way to define this somewhere in colima?
I had the same issue. It is possible to customise a Colima VM's CPUs, memory (GiB) and disk (GiB):
colima start --cpu 4 --memory 4 --disk 100
But it is a bit confusing, because the documentation states:
the default VM created by Colima has 2 CPUs, 2GiB memory and 60GiB storage
Colima - Customizing the VM
The default VM created by Colima has 2 CPUs, 2GiB memory and 60GiB storage.
The VM can be customized either by passing additional flags to colima start. e.g. --cpu, --memory, --disk, --runtime. Or by editing the config file with colima start --edit.
NOTE: disk size cannot be changed after the VM is created.
Customization Examples
create VM with 1CPU, 2GiB memory and 10GiB storage.
colima start --cpu 1 --memory 2 --disk 10
modify an existing VM to 4CPUs and 8GiB memory.
colima stop
colima start --cpu 4 --memory 8
Resize storage to 100GiB.
Since the disk size cannot be changed after the VM is created, the workaround is to destroy and recreate the VM, accepting that its data is lost:
colima delete
colima start --disk 100
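If the data you care about is the Docker images themselves, you can soften that data loss by exporting them before recreating the VM; a rough sketch (my-app:latest is a placeholder image name):
# save the images you want to keep to a tarball on the host
docker save -o my-app.tar my-app:latest
# recreate the VM with a bigger disk
colima delete
colima start --disk 100
# load the images back into the new VM
docker load -i my-app.tar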
Reference
https://stackoverflow.com/a/74402260/9345651 by Carlos Cavero
https://github.com/abiosoft/colima#customizing-the-vm
https://github.com/abiosoft/colima/blob/main/docs/FAQ.md

Increasing Docker partition size on Google Cloud Compute instance running container image

I have an Apache server in production that is running in a Docker container, which I've deployed to a Google Compute instance using the "gcloud compute instances create-with-container" command. The /var/www/html folder is mounted into the container from the boot disk of the compute instance to make it persistent, using the --container-mount-host-path flag:
gcloud compute instances create-with-container $INSTANCE_NAME \
--zone=europe-north1-a \
--container-image gcr.io/my-project/my-image:latest \
--container-mount-host-path mount-path=/var/www/html,host-path=/var/www/html,mode=rw \
--machine-type="$MACHINE_TYPE"
But now I've run into the problem that the size of the Docker partition is only 5.7G!
Output of df -h:
...
/dev/sda1 5.7G 3.6G 2.2G 62% /mnt/stateful_partition
overlay 5.7G 3.6G 2.2G 62% /var/lib/docker/overlay2/4f223d8157033ce937a79af741df3eadf79a02d2d003f01a085301ff66884bf2/merged
overlay 5.7G 3.6G 2.2G 62% /var/lib/docker/overlay2/86316491e2bb20bc300c1cc55c9f9254001ed77d6ec7f05f716af1e52fe15f53/merged
...
I had assumed that the partition size would increase automatically, but I ran into the problem where the website couldn't write files to disk anymore because the partition was full. As a quick fix, I ran "docker system prune -a" (there were a bunch of old images hanging around) on the host machine to make some more space on the Docker partition.
So my question is, what is the proper way of increasing the size of the partition?
You can resize the boot disk in the Google Cloud Console GUI. However, since this is a container host, I recommend deleting the virtual machine instance and creating a new instance with the correct configuration.
The default disk size is usually 10 GB. To create a virtual machine instance with a larger disk, specify that when creating the instance.
Add the following to your CLI command:
--boot-disk-size=32GB
Optionally specify the type of persistent disk to control costs:
--boot-disk-type=pd-standard
gcloud compute instances create-with-container
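Putting that together with the command from the question, a sketch (the zone, image and mount flags are just the ones from the question):
gcloud compute instances create-with-container $INSTANCE_NAME \
--zone=europe-north1-a \
--container-image gcr.io/my-project/my-image:latest \
--container-mount-host-path mount-path=/var/www/html,host-path=/var/www/html,mode=rw \
--machine-type="$MACHINE_TYPE" \
--boot-disk-size=32GB \
--boot-disk-type=pd-standard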

How can I fix 'No space left on device' error in Docker?

I'm running a Mac-native Docker (no virtualbox/docker-machine).
I have a huge image with a lot of infrastructure in it (Postgres, etc.).
I have run cleanup scripts to get rid of a lot of cruft--unused images and so forth.
When I run my image I get an error like:
could not create directory "/var/lib/postgresql/data/pg_xlog": No space left on device
On my host Mac /var is sitting at 60% space available and generally my disk has lots of storage free.
Is this some Docker configuration I need to bump up to give it more resources?
Relevant lines from mount inside docker:
none on / type aufs (rw,relatime,si=5b19fc7476f7db86,dio,dirperm1)
/dev/vda1 on /data type ext4 (rw,relatime,data=ordered)
/dev/vda1 on /etc/resolv.conf type ext4 (rw,relatime,data=ordered)
/dev/vda1 on /etc/hostname type ext4 (rw,relatime,data=ordered)
/dev/vda1 on /etc/hosts type ext4 (rw,relatime,data=ordered)
/dev/vda1 on /var/lib/postgresql/data type ext4 (rw,relatime,data=ordered)
Here’s df:
Filesystem 1K-blocks Used Available Use% Mounted on
none 202054928 4333016 187269304 3% /
tmpfs 1022788 0 1022788 0% /dev
tmpfs 1022788 0 1022788 0% /sys/fs/cgroup
/dev/vda1 202054928 4333016 187269304 3% /data
shm 65536 4 65532 1% /dev/shm
tmpfs 204560 284 204276 1% /run/docker.sock
I haven't found many options for this; the main issue on GitHub is https://github.com/docker/for-mac/issues/371
Some of the options suggested there are:
If you can remove all images/containers, you can follow these instructions:
docker rm $(docker ps -a -q)
docker rmi $(docker images -q)
docker volume rm $(docker volume ls -q)
rm -rf ~/Library/Containers/com.docker.docker/Data/*
You can try to prune all unused images/containers but this has proven not very effective:
docker system prune
Use a template image that is larger: install qemu using homebrew and move the image around; see this specific comment: https://github.com/docker/for-mac/issues/371#issuecomment-242047368 but you need to have at least 2x the space free to do this without losing containers/images.
See also: How do you get around the size limitation of Docker.qcow2 in the Docker for Mac?
And https://forums.docker.com/t/no-space-left-on-device-error/10894/26
I ran into the same issue, running docker system prune --volumes resolved the problem.
"Volumes are not pruned by default, and you must specify the --volumes flag for docker system prune to prune volumes."
See: https://docs.docker.com/config/pruning/#prune-everything
I ran into this recently with a Docker installation on Linux that uses the devicemapper storage driver (the default). There was indeed a Docker configuration I needed to change to fix this.
Docker images are made of read-only layers of filesystem snapshots, each layer created by a command in your Dockerfile, which are built on top of a common base storage snapshot. The base snapshot is shared by all your images and has a file system with a default size of 10GB. When you run your image you get a new writable layer on top of all the layers in the image, so you can add new files in your running container but it's still eventually based on the same base snapshot with the 10GB filesystem. This is at least true for devicemapper, not sure about other drivers. Here is the relevant documentation from docker.
To change this default value to something else, there's a daemon parameter you can set, e.g. docker daemon --storage-opt dm.basesize=100G. Since you probably don't run the daemon manually, you'll need to edit the Docker daemon options in a config file, depending on how you run the docker daemon. With Docker for Mac you can edit the daemon parameters as JSON in the preferences under Daemon->Advanced. You probably need to add something like this:
{
"storage-opts": ["dm.basesize=100G"]
}
(but like I said, I had this problem on Linux, so I didn't try the above).
Anyway in order for this to take effect, you'll need to remove all your existing images (so that they're re-created on top of the new base snapshot with the new size). See storage driver options.
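On Linux the usual place for these daemon options is /etc/docker/daemon.json rather than the Docker for Mac preferences; a sketch, assuming the devicemapper driver is in use (paths and size are examples):
# write the daemon config and restart docker
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "devicemapper",
  "storage-opts": ["dm.basesize=100G"]
}
EOF
sudo systemctl restart docker
# remember: existing images must be removed so they are re-created on the larger base device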

Clean docker environment: devicemapper

I have a docker environment with 2 containers (Jenkins and Nexus, both with their own named volume).
I have a daily cron-job which deletes unused containers and images. This is working fine. But the problem is inside my devicemapper:
du -sh /var/lib/docker/
30G docker/
I checked the size of each folder in my docker folder:
Volumes (big, but that's normal in my case):
/var/lib/docker# du -sh volumes/
14G volumes/
Containers:
/var/lib/docker# du -sh containers/
3.2M containers/
Images:
/var/lib/docker# du -sh image/
5.8M image/
Devicemapper:
/var/lib/docker# du -sh devicemapper/
16G devicemapper/
/var/lib/docker/devicemapper/mnt is 7.3G
/var/lib/docker/devicemapper/devicemapper is 8.1G
Docker info:
Storage Driver: devicemapper
Pool Name: docker-202:1-xxx-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: ext4
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 5.377 GB
Data Space Total: 107.4 GB
Data Space Available: 28.8 GB
Metadata Space Used: 6.148 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.141 GB
Udev Sync Supported: true
What is this space and am I able to clean this without breaking stuff?
Don't use a devicemapper loop file for anything serious! Docker has big warnings about this.
The /var/lib/docker/devicemapper/devicemapper directory contains the sparse loop files that contain all the data that docker mounts. So you would need to use lvm tools to trawl around them and do things. Have a read through the removal issues with devicemapper; they are kinda sorta resolved, but maybe not.
I would move away from devicemapper where possible or use LVM thin pools on anything RHEL based. If you can't change storage drivers, the same procedure will at least clear up any allocated sparse space you can't reclaim.
Changing the docker storage driver
Changing the storage driver will require dumping your /var/lib/docker directory, which contains all your Docker data. There are ways to save portions of it, but that involves messing around with Docker internals. Better to commit and export any containers or volumes you want to keep and import them after the change (see the sketch after the steps below). Otherwise you will have a fresh, blank Docker install!
Export data
Stop Docker
Remove /var/lib/docker
Modify your docker startup to use the new storage driver.
Set --storage-driver=<name> in /lib/systemd/system/docker.service or /etc/systemd/system/docker.service or /etc/default/docker or /etc/sysconfig/docker
Start Docker
Import Data
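A rough sketch of those export/import steps, assuming a container named jenkins and a named volume jenkins_home you want to keep (both names are examples):
# before the change: snapshot the container filesystem and save it as a tarball
docker commit jenkins jenkins-backup
docker save -o jenkins-backup.tar jenkins-backup
# archive the named volume from a throwaway container
docker run --rm -v jenkins_home:/vol -v "$PWD":/backup busybox tar czf /backup/jenkins_home.tgz -C /vol .
# after switching the storage driver: load the image back in and restore the volume contents
docker load -i jenkins-backup.tar
docker run --rm -v jenkins_home:/vol -v "$PWD":/backup busybox tar xzf /backup/jenkins_home.tgz -C /vol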
AUFS
AUFS is not in the mainline kernel (and never will be), which means distros have to actively include it somehow. For Ubuntu it's in the linux-image-extra packages.
apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
Then change the storage driver option to --storage-driver=aufs
OverlayFS
OverlayFS is already available in Ubuntu, just change the storage driver to --storage-driver=overlay2 or --storage-driver=overlay if you are still using a 3.x kernel
I'm not sure how good an idea this is right now, though it can't be much worse than the loop file. The overlay2 driver is pretty solid for dev use but isn't considered production ready yet (e.g. Docker Enterprise doesn't provide support for it); however, it is being pushed to become the standard driver due to the AUFS/kernel issues.
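A quick way to check that the running kernel actually supports overlay before switching (a sketch):
# overlay should appear in /proc/filesystems once the module is available
grep -w overlay /proc/filesystems || sudo modprobe overlay
uname -r   # overlay2 expects a 4.x kernel; use overlay on 3.x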
Direct LVM Thin Pool
Instead of the devicemapper loop file you can use an LVM thin pool directly. RHEL makes this easy with a docker-storage-setup utility that is distributed with their EPEL docker package. Docker has detailed steps for setting up the volumes manually.
--storage-driver=devicemapper \
--storage-opt=dm.thinpooldev=/dev/mapper/docker-thinpool \
--storage-opt dm.use_deferred_removal=true
Docker 17.06+ supports managing simple direct-lvm block device setups for you.
Just don't run out of space in the LVM volume, ever. You end up with an unresponsive Docker daemon that needs to be killed and then LVM resources that are still in use that are hard to clean up.
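Keeping an eye on the pool with lvs is cheap; a sketch (volume group and pool names will differ per host):
# Data% and Meta% show how full the thin pool is -- act well before they reach 100
sudo lvs -o vg_name,lv_name,data_percent,metadata_percent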
A periodic docker system prune -a works for me on systems where I use devicemapper and not the LVM thinpool. The pattern I use is:
I label any containers, images, etc with label "protected" if I want them to be exempt from cleanup
I then periodically run docker system prune -a --filter=label!=protected (either manually or on cron with -f)
Labeling examples:
docker run --label protected ...
docker create --label=protected=true ...
For images, use the Dockerfile LABEL instruction, e.g. LABEL protected=true
To add a label to an existing image that I cannot easily rebuild, I make a 2 line Dockerfile with the above, build a new image, then switch the new image for the old one (tag).
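Roughly like this, as a sketch (my-app:1.0 and Dockerfile.protect are placeholder names):
# Dockerfile.protect -- just re-bases the existing image and adds the label
FROM my-app:1.0
LABEL protected=true
# build it, then move the original tag onto the labelled image
docker build -f Dockerfile.protect -t my-app:1.0-protected .
docker tag my-app:1.0-protected my-app:1.0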
General Docker label documentation
First, what is devicemapper (official documentation)
Device Mapper has been included in the mainline Linux kernel since version 2.6.9 [in 2005]. It is a core part of RHEL family of Linux distributions.
The devicemapper driver stores every image and container on its own virtual device. These devices are thin-provisioned copy-on-write snapshot devices.
Device Mapper technology works at the block level rather than the file level. This means that devicemapper storage driver's thin provisioning and copy-on-write operations work with blocks rather than entire files.
The devicemapper is the default Docker storage driver on some Linux distributions.
Docker hosts running the devicemapper storage driver default to a configuration mode known as loop-lvm. This mode uses sparse files to build the thin pool used by image and container snapshots
Docker 1.10 [from 2016] and later no longer matches image layer IDs with directory names in /var/lib/docker.
However, there are two key directories.
The /var/lib/docker/devicemapper/mnt directory contains the mount points for image and container layers.
The /var/lib/docker/devicemapper/metadata directory contains one file for every image layer and container snapshot.
If your docker info does show your Storage Driver is devicemapper (and not aufs), proceed with caution with those folders.
See for instance issue 18867.
I faced the same issue, where the /var/lib/docker/devicemapper/devicemapper/data file had reached ~91% of the root volume (~45G of 50G). I tried removing all the unwanted images and deleting volumes, but nothing helped reduce this file.
After a bit of googling I understood that the "data" file is a loopback-mounted sparse file that docker uses to store the mount locations and other files we would otherwise have stored inside the containers.
Finally I removed all the containers which had been run before and were now stopped.
Warning: this deletes all your docker containers
docker rm $(docker ps -aq)
That reduced the devicemapper file significantly. Hope this helps you.

Set disk quota with DeviceMapper

I have changed the storage driver to devicemapper. docker info gives the following output.
Server Version: 1.9.0
Storage Driver: devicemapper
Pool Name: docker-253:1-16-pool
Pool Blocksize: 65.54 kB
Base Device Size: 107.4 GB
Backing Filesystem: extfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 1.821 GB
Data Space Total: 268.4 GB
Data Space Available: 11.66 GB
Metadata Space Used: 2.101 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.145 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.90 (2014-09-01)
Execution Driver: native-0.2
First of all, I don't know how to set a quota per container. Should I maybe use flags in my docker run commands?
With devicemapper as the storage driver, you cannot set a different disk size per container; every container gets the same fixed size. As per the output of docker info, that fixed size in your case is around 100GB (Base Device Size: 107.4 GB). However, you have one of the following 2 options, depending on your requirement.
a.) You can change this fixed size from 100GB to some other value like 20GB, but in that case all containers will still share that one fixed disk size of 20GB. If you want to go ahead with this option, you can follow these steps:
Stop docker service, sudo service docker stop
Remove the existing docker directory (which in your case is the default one, i.e. /var/lib/docker) -- NOTE: this will delete all your existing docker images and containers.
Start the docker daemon with the option docker daemon -s devicemapper --storage-opt dm.basesize=20G
Or, in place of step 3, add the option DOCKER_OPTS='-g /var/lib/docker -s devicemapper --storage-opt dm.basesize=20G' in the file /etc/default/docker and restart the docker service:
sudo service docker start
Now, whatever containers you spawn will have a disk size of 20GB.
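After the restart you can confirm the new base size took effect; a quick check (a sketch):
docker info | grep "Base Device Size"
# with dm.basesize=20G this should now report roughly 21.47 GB instead of 107.4 GB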
b.) As a second option, you can increase the disk size of your existing containers beyond whatever base disk size you have set (which by default is 100GB, or 20GB if you follow the first option). There is a very useful article you can follow to do this; it may help you set a different disk size for different containers.
Hope this answer is useful for your requirement, thanks.
