How to extend a logical volume which has a snapshot on it - storage

I have an LV (logical volume) which is the origin of a snapshot. I want to extend the LV with the 'lvextend' command, but the first try failed with this error:
Snapshot origin volumes can be resized only while inactive
So I tried a second way, with this command sequence:
1. umount [mount_path]
2. deactivate the device with lvchange -an [device_path]
3. lvextend [device_path]
Then I get this error:
LV [device_name] has open snapshot [snapshot_name]: not deactivating
What should I do? How can I extend an LV that has a snapshot?

You must deactivate your snapshot too, with the command:
lvchange -an [snapshot_path]
Once both the volume and the snapshot are inactive, you can extend your volume.
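For example, a minimal sketch of the whole sequence, assuming a hypothetical volume group vg0 with origin LV data (mounted at /mnt/data) and snapshot data_snap:
umount /mnt/data
lvchange -an /dev/vg0/data_snap    # deactivate the snapshot first
lvchange -an /dev/vg0/data         # now the origin can be deactivated
lvextend -L +10G /dev/vg0/data     # grow the inactive origin by 10 GiB
lvchange -ay /dev/vg0/data         # reactivate origin and snapshot
lvchange -ay /dev/vg0/data_snap
mount /dev/vg0/data /mnt/data
Remember to also grow the filesystem (e.g. resize2fs for ext4) so it can use the new space.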

You don't need to deactivate a volume that is mounted at /; you can resize it to occupy the empty space on your disk while the volume is mounted.
To deactivate your snapshot, run:
lvchange -an /path/to/snapshot
To extend the volume that's mapped to /, use the --resizefs option of lvextend.
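For example, a sketch using a placeholder root LV name (yours may differ; check lvs):
lvextend -l +100%FREE --resizefs /dev/ubuntu-vg/root    # grow into all free space and resize the filesystem in one step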

Related

Change default volume mount point for docker rootless?

I saw this post with different solutions for a standard Docker installation:
How to change the default location for "docker create volume" command?
At first glance, I'm struggling to repeat those steps to change the default mount point for a rootless installation.
Should it be the same? What would be the procedure?
I just got it working. I had some issues because I had the service running while trying to change the configuration. Key takeaways:
The config file is indeed stored in ~/.config/docker/. One must create a daemon.json file there in order to change preferences. We would like to change the data-root option (and storage-driver, in case the drive does not support the default one); see the sketch after this list.
To start and stop the headless service one runs systemctl --user [start | stop] docker.
a. Running the system-wide service starts a parallel and separate instance of Docker, which is not rootless.
b. When stopping, make sure to stop docker.socket first.
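A minimal sketch of ~/.config/docker/daemon.json with both options (the data directory path and the fuse-overlayfs driver are assumptions, not from the original post):
{
  "data-root": "/home/ubuntu/docker-data",
  "storage-driver": "fuse-overlayfs"
}
After editing, restart the rootless service with systemctl --user restart docker.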
Sources are (see the Usage section for rootless) and (config file information).
We ended up with an indirect solution. We identified the directory where the volumes are mounted by default and created a symbolic link pointing to the place where we actually want to store the data. In our case that was enough. Something like this:
sudo ln -s /data /home/ubuntu/.local/share/docker/volumes

zfs: filesystem has dependent clones

I am running Ubuntu 20.04 and using zfs on my system drive.
I am trying to remove a docker container but I get this error:
glen $ docker rm c3250e315b06
Error response from daemon: container c3250e315b0631cc7fee17ab0c7f649a3995ea17e969705117e064a045b3775e: driver "zfs" failed to remove root filesystem: exit status 1: "/usr/sbin/zfs fs destroy -r rpool/ROOT/ubuntu_bl0u7i/var/lib/120f50d109cf1c84f20db9e6402fef9a4bd91fa8b94f1848a874539663bbdc40" => cannot destroy 'rpool/ROOT/ubuntu_bl0u7i/var/lib/120f50d109cf1c84f20db9e6402fef9a4bd91fa8b94f1848a874539663bbdc40': filesystem has dependent clones
use '-R' to destroy the following datasets:
rpool/ROOT/ubuntu_bl0u7i/var/lib/38ff67538bf4b2ccfef54cfeb55847cf6da6bee70a6bf2e5b063ab0e5820c0fd
rpool/ROOT/ubuntu_bl0u7i/var/lib/120f50d109cf1c84f20db9e6402fef9a4bd91fa8b94f1848a874539663bbdc40-init
I have no idea where to start with the error.
Can anyone help?
Edit:
I fixed it by following this comment: https://github.com/moby/moby/issues/36967#issuecomment-676698563
but it nuked all my containers.
I'm not sure how to do it through Docker, but ZFS is telling you that the filesystem rpool/ROOT/ubuntu_bl0u7i/var/lib/120f50d...bbdc40 has a couple of clones created from snapshots on that filesystem. For the sake of argument, let's say there's just one, and the cloned filesystem is called clone1, which was created off of snapshot1 on the rpool/...bbdc40 filesystem. So your hierarchy is like this:
rpool/...bbdc40 -> rpool/...bbdc40@snapshot1 -> clone1
The problem is that clone1 is still referencing data from snapshot1, so you can't delete the snapshot, which prevents you from deleting the original filesystem.
However, ZFS allows you to change who the "parent" filesystem is by using the zfs promote command, which lets you change the hierarchy to this:
clone1 -> clone1@snapshot1 -> rpool/...bbdc40
Now nobody is depending on the data in rpool/...bbdc40 (because the snapshot has been moved to be on the newly promoted parent, clone1), so you can delete it.
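A sketch of the fix, using the answer's hypothetical names (zfs list -t all will show your real dataset, snapshot, and clone names):
zfs promote clone1                # clone1 becomes the parent; the snapshot moves to clone1@snapshot1
zfs destroy -r rpool/...bbdc40    # nothing depends on the original filesystem anymore
(rpool/...bbdc40 stands for the full dataset name from the error message.)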
(That said, Docker probably assumes that it has full control over the state for its filesystems, so if you go around running random ZFS commands it risks making Docker sad and confused. Use at your own risk.)

"Device or resource busy" when i try move /etc/resolv.conf in ubuntu:18.04. How fix it?

I have a VPN client in my Docker container (ubuntu:18.04).
The client must do the following:
mv /etc/resolv.conf /etc/resolv.conf.orig
Then the client should create a new /etc/resolv.conf with its own DNS servers. However, the move fails with an error:
mv: cannot move '/etc/resolv.conf' to '/etc/resolv.conf.orig': Device or resource busy
Can this be fixed? Thanks in advance.
P.S.: I can't change the VPN client code.
Within the Docker container the /etc/resolv.conf file is not an ordinary regular file. Docker manages it in a special manner: the container engine writes container-specific configuration into the file outside of the container and bind-mounts it to /etc/resolv.conf inside the container.
When your VPN client runs mv /etc/resolv.conf /etc/resolv.conf.orig, things boil down to the rename(2) syscall (or a similar call from this family), and, according to the manpage for this syscall, the EBUSY (Device or resource busy) error can be returned for a few reasons, including the situation where the original file is a mount point:
EBUSY The rename fails because oldpath or newpath is a directory that is in use by some process (perhaps as current working directory, or as root directory, or because it was open for reading) or is in use by the system (for example as mount point), while the system considers this an error. (Note that there is no requirement to return EBUSY in such cases — there is nothing wrong with doing the rename anyway — but it is allowed to return EBUSY if the system cannot otherwise handle such situations.)
Though there is a remark that the error is not guaranteed to be produced in such circumstances, it seems that it always fires for bind-mount targets (which is probably what happens here):
$ touch sourcefile destfile
$ sudo mount --bind sourcefile destfile
$ mv destfile anotherfile
mv: cannot move 'destfile' to 'anotherfile': Device or resource busy
So, similarly, you cannot move /etc/resolv.conf inside the container, because it is a bind-mount target, and there is no straightforward solution.
Given that the bind-mount of /etc/resolv.conf is a read-write mount, not a read-only one, it is still possible to overwrite this file:
$ mount | grep resolv.conf
/dev/sda1 on /etc/resolv.conf type ext4 (rw,relatime)
So, a possible fix is to copy this file to a .orig backup and then rewrite the original in place, instead of renaming the original file and then re-creating it.
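A minimal sketch of that copy-then-overwrite approach (the 10.0.0.1 nameserver is just a placeholder):
cp /etc/resolv.conf /etc/resolv.conf.orig         # back up without unlinking the mount target
echo "nameserver 10.0.0.1" > /etc/resolv.conf     # truncate and rewrite the file in place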
Unfortunately, this does not meet your restrictions ("I can't change the VPN client code."), so I bet you are out of luck here.
Any method that requires moving a file onto /etc/resolv.conf fails in a Docker container.
The workaround is to rewrite the original file in place instead of moving or renaming a modified version onto it.
For example, use the following at a bash prompt:
(rc=$(sed 's/^\(nameserver 192\.168\.\)/# \1/' /etc/resolv.conf)
echo "$rc" > /etc/resolv.conf)
This works by rewriting /etc/resolv.conf as follows:
read and modify the current contents of /etc/resolv.conf through the stream editor, sed
the sed script in this example comments out lines starting with nameserver 192.168.
save the updated contents in a variable, rc
overwrite the original file /etc/resolv.conf with the updated contents in "$rc"
The command list is wrapped in parentheses so it runs in a subshell, to avoid polluting the current shell's namespace with the variable name rc, just in case it happens to be in use.
Note that this command does not require sudo, since it takes advantage of the superuser privileges available by default inside the container.
Note that sed -i (in-place editing) involves moving the updated file onto the original, so it will not work.
But if the visual editor, vi, is available in the container, editing and saving /etc/resolv.conf with vi works, since vi modifies the original file directly.

Dockerfile build error and writes to another folder

I made a Dockerfile like this:
FROM hyeshik/tailseeker:latest
RUN rm /opt/tailseeker/conf/defaults.conf
COPY /Users/Downloads/defaults.conf /opt/tailseeker/conf/
COPY /Users/Downloads/level2/* /opt/tailseeker/refdb/level2/
COPY /Users/Downloads/level3/* /opt/tailseeker/refdb/level3/
My /Users/Downloads/ folder also contains other folders, including one named input.
When I ran
docker build -f /Users/Downloads/Dockerfile /Users/Downloads/
I got an error saying:
Sending build context to Docker daemon 126.8 GB
Error response from daemon: Error processing tar file(exit status 1): write /input/Logs/Log.00.xml: no space left on device
One strange thing here is why it is trying to write to the input folder at all. The other is why it complains about no space left on device: I have a 1 TB disk and only 210 GB of it is used. I also used qemu-img and resized my Docker.qcow2. Here is the info for my Docker.qcow2:
image: /Users/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
file format: qcow2
virtual size: 214G (229780750336 bytes)
disk size: 60G
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: true
refcount bits: 16
corrupt: false
Can anyone please help me copy the contents of my /Users/Downloads folder into the Docker image using the Dockerfile above?
Thanks in advance.
docker build starts by creating a tarball from the context directory (in your case /Users/Downloads/) and sending that tarball to the daemon. The tarball is created in the temp directory, which is probably why you're running out of space when trying to build.
When you're working with large datasets, the recommended approach is to use a volume. You can use a bind-mount volume to mount the files from the host, as in the example below.
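For instance (a sketch: the container paths come from the Dockerfile above, and the image is the one in its FROM line):
docker run -v /Users/Downloads/level2:/opt/tailseeker/refdb/level2 \
           -v /Users/Downloads/level3:/opt/tailseeker/refdb/level3 \
           hyeshik/tailseeker:latest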
If the files you're trying to add aren't that large, you might just need a .dockerignore file to exclude the other files under /Users/Downloads (such as the input folder); a sketch follows.
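A minimal .dockerignore for this build context (an assumption based on the COPY lines above: exclude everything, then re-include what the Dockerfile needs):
*
!defaults.conf
!level2
!level3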
You can also start the Docker daemon with an alternative temp directory by setting $DOCKER_TMPDIR.

What is the difference between save and export in Docker?

I have been playing around with Docker for a couple of days and I have already made some images (which was really fun!). Now I want to persist my work and came across the save and export commands, but I don't fully understand them.
What is the difference between save and export in Docker?
The short answer is:
save will fetch an image: for a VM or a physical server, that would be the installation .ISO image or disk. The base operating system.
It will pack the layers and metadata of the whole chain required to build the image. You can then load this "saved" image chain into another Docker instance and create containers from those images.
export will fetch the whole container: like a snapshot of a regular VM. It saves the OS, of course, but also any change you made and any data file written during the container's life. This one is more like a traditional backup.
It will give you a flat .tar archive containing the filesystem of your container.
Edit: as my explanation may still lead to confusion, I think that it is important to understand that one of these commands works with containers, while the other works with images.
An image has to be considered as 'dead' or immutable, starting 0 or 1000 containers from it won't alter a single byte. That's why I made a comparison with a system install ISO earlier. It's maybe even closer to a live-CD.
A container "boots" the image and adds an additional layer on top of it. This layer stores any change on the container (created/changed/removed files...).
There are two main differences between the save and export commands.
The save command saves the whole image, with history and metadata, but the export command exports only the file structure (without history or metadata). So the exported tar file will be smaller than the saved one.
When you use the exported filesystem to create a new image, that new image will not contain any USER, EXPOSE, RUN, etc. instructions from your Dockerfile. Only the file structure is transferred.
So when you are using those keywords in your Dockerfile, you cannot use the export command to transfer the image to another machine; you always need to use the save command.
export: container (filesystem) -> image tar.
import: exported image tar -> image. Only one layer.
save: image -> image tar.
load: saved image tar -> image. All layers will be recovered.
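A minimal round trip with all four commands (the image, container, and file names are placeholders):
docker save -o myimage.tar ubuntu:18.04       # image -> tar, all layers and tags kept
docker load -i myimage.tar                    # tar -> image, layers restored
docker export -o rootfs.tar mycontainer       # container filesystem -> flat tar
docker import rootfs.tar myflat:latest        # flat tar -> single-layer image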
From Docker in Action, Second Edition p190.
Layered images maintain the history of the image, container-creation metadata, and old files that might have been deleted or overridden.
Flattened images contain only the current set of files on the filesystem.
The exported image will not have any layer or history information saved, so it will be smaller and you will not be able to roll back.
The saved image will have layer and history information, so it will be larger.
If you are giving this to a customer, the question is: do you want to keep those layers or not?
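You can see the difference with docker history (image names are the placeholders from the round-trip sketch above): a loaded image shows its full layer history, while an imported one shows a single layer:
docker history ubuntu:18.04    # many layers, with creation metadata
docker history myflat:latest   # one layer, no build history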
Technically, save/load works with repositories, which can contain one or more images (also referred to as layers). An image is a single layer within a repo. Finally, a container is an instantiated image (running or not).
docker save produces a tar file containing all parent layers, and all tags + versions (or the specified repo:tag), for each image argument provided.
docker export produces the specified file (can be tar or tgz) containing the flat contents of a container's filesystem, without the contents of specified volumes.
docker save needs to be used on a Docker image, while docker export needs to be used on a container (a running image, so to speak).
Save usage:
docker save [OPTIONS] IMAGE [IMAGE...]
Save an image(s) to a tar archive (streamed to STDOUT by default)
  --help=false     Print usage
  -o, --output=""  Write to a file, instead of STDOUT
Export usage:
docker export [OPTIONS] CONTAINER
Export the contents of a container's filesystem as a tar archive
  --help=false     Print usage
  -o, --output=""  Write to a file, instead of STDOUT
