I am running Ubuntu 20.04 and using zfs on my system drive.
I am trying to remove a docker container but I get this error:
glen $ docker rm c3250e315b06
Error response from daemon: container c3250e315b0631cc7fee17ab0c7f649a3995ea17e969705117e064a045b3775e: driver "zfs" failed to remove root filesystem: exit status 1: "/usr/sbin/zfs fs destroy -r rpool/ROOT/ubuntu_bl0u7i/var/lib/120f50d109cf1c84f20db9e6402fef9a4bd91fa8b94f1848a874539663bbdc40" => cannot destroy 'rpool/ROOT/ubuntu_bl0u7i/var/lib/120f50d109cf1c84f20db9e6402fef9a4bd91fa8b94f1848a874539663bbdc40': filesystem has dependent clones
use '-R' to destroy the following datasets:
rpool/ROOT/ubuntu_bl0u7i/var/lib/38ff67538bf4b2ccfef54cfeb55847cf6da6bee70a6bf2e5b063ab0e5820c0fd
rpool/ROOT/ubuntu_bl0u7i/var/lib/120f50d109cf1c84f20db9e6402fef9a4bd91fa8b94f1848a874539663bbdc40-init
I have no idea where to start with the error.
Can anyone help?
Edit:
I fixed it using this comment: https://github.com/moby/moby/issues/36967#issuecomment-676698563
but it nuked all my containers.
I'm not sure how to do it through Docker, but ZFS is telling you that filesystem rpool/ROOT/ubuntu_bl0u7i/var/lib/120f50d...bbdc40 had a couple clones created from snapshots on that filesystem. For the sake of argument let's say there's just one, and the cloned filesystem is called clone1, which was created off of snapshot1 on the rpool/...bbdc40 filesystem. So your hierarchy is like this:
rpool/...bbdc40 -> rpool/...bbdc40@snapshot1 -> clone1
The problem is that clone1 is still referencing data from snapshot1, so you can't delete the snapshot, which prevents you from deleting the original filesystem.
However, ZFS allows you to change who the "parent" filesystem is by using the zfs promote command, which lets you change the hierarchy to this:
clone1 -> clone1@snapshot1 -> rpool/...bbdc40
Now nobody is depending on the data in rpool/...bbdc40 (because the snapshot has been moved to be on the newly promoted parent, clone1), so you can delete it.
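For illustration, a rough sketch of that with plain zfs commands, assuming the 38ff... dataset listed in the error is the dependent clone that needs promoting (double-check with zfs list before destroying anything, since this bypasses Docker's own bookkeeping):
sudo zfs list -t all -r rpool/ROOT/ubuntu_bl0u7i/var/lib
sudo zfs promote rpool/ROOT/ubuntu_bl0u7i/var/lib/38ff67538bf4b2ccfef54cfeb55847cf6da6bee70a6bf2e5b063ab0e5820c0fd
sudo zfs destroy -r rpool/ROOT/ubuntu_bl0u7i/var/lib/120f50d109cf1c84f20db9e6402fef9a4bd91fa8b94f1848a874539663bbdc40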
(That said, Docker probably assumes that it has full control over the state for its filesystems, so if you go around running random ZFS commands it risks making Docker sad and confused. Use at your own risk.)
Related
I saw this post with different solutions for standard docker installation:
How to change the default location for "docker create volume" command?
At first glance, I'm struggling to repeat those steps for the rootless installation.
Should it be the same? What would be the procedure?
I just got it working. I had some issues because I had the service running while trying to change configurations. Key takeaways:
The config file is indeed stored in ~/.config/docker/. One must create a daemon.json file there in order to change preferences. We want to change the data-root option (and storage-driver, in case the drive does not support the default driver); a minimal example is sketched below.
To start and stop the headless service one runs systemctl --user [start | stop] docker.
a. Running the systemwide service starts a parallel and separate instance of docker, which is not rootless.
b. When stopping, make sure to stop docker.socket first.
Sources: the rootless mode documentation (see the Usage section) and the daemon configuration file documentation.
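For reference, a minimal ~/.config/docker/daemon.json along these lines (the path and the storage driver here are only illustrative; adjust them to your setup), after which you restart the rootless service with systemctl --user restart docker:
{
  "data-root": "/data/docker-rootless",
  "storage-driver": "fuse-overlayfs"
}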
We ended up with an indirect solution. We identified the directory where the volumes are mounted by default and created a symbolic link pointing to the place where we actually want to store the data. In our case that was enough. Something like this:
sudo ln -s /data /home/ubuntu/.local/share/docker/volumes
I ran a container and it was missing a command alias like ll, so I typed alias ll="ls -lta" in the terminal while I was inside the container. After that, I ran docker commit to commit the changes to the container as an image. I got a new image (outside the container), deleted the old image, and ran a new container from the image I had committed, but I was not able to use the ll alias. What am I missing here?
Container state is only persisted through files.
alias ll="ls -lta" made no file changes, and thus no state change was persisted by the docker commit.
You may achieve the result you intend by editing one of the files that the shell uses to define its state when opened, e.g. ~/.bashrc or ~/.bash_profile. You'll need to determine which one to use for your environment/OS.
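A minimal sketch of that approach, assuming the container runs as root and uses bash (the container name my-container and the tag my-image:ll are made up for illustration):
docker exec my-container sh -c 'echo "alias ll=\"ls -lta\"" >> /root/.bashrc'
docker commit my-container my-image:ll
docker run --rm -it my-image:ll bash
The first command writes the alias into a file bash reads on startup, so the change is part of the filesystem and survives the commit.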
Say there is an image A described by the following Dockerfile:
FROM bash
RUN mkdir "/data" && echo "FOO" > "/data/test"
VOLUME "/data"
I want to specify an image B that inherits from A and modifies /data/test. I don't want to mount the volume; I want it to have some default data that I specify in B:
FROM A
RUN echo "BAR" > "/data/test"
The thing is that the test file keeps the content it had at the moment of the VOLUME instruction in A's Dockerfile. The test file in image B contains FOO instead of the BAR I would expect.
The following Dockerfile demonstrates the behavior:
FROM bash
# overwriting volume file
RUN mkdir "/volume-data" && echo "FOO" > "/volume-data/test"
VOLUME "/volume-data"
RUN echo "BAR" > "/volume-data/test"
RUN cat "/volume-data/test" # prints "FOO"
# overwriting non-volume file
RUN mkdir "/regular-data" && echo "FOO" > "/regular-data/test"
RUN echo "BAR" > "/regular-data/test"
RUN cat "/regular-data/test" # prints "BAR"
Building the Dockerfile will print FOO and BAR.
Is it possible to modify file /data/test in B Dockerfile?
It seems that this is intended behavior.
Changing the volume from within the Dockerfile: If any build steps change the data within the volume after it has been declared, those changes will be discarded.
VOLUMEs are not part of your IMAGE, so what is the use case for seeding data into them? When you push the image to another location and run it there, it starts with an empty volume. The Dockerfile behaviour reminds you of that.
So basically, if you want to keep the data along with the app code, you should not use VOLUMEs. If the volume declaration already exists in the parent image, then you need to remove the volume before starting your own image build (e.g. with docker-copyedit).
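For example, with the docker-copyedit script, roughly like this (a sketch; check that project's README for the exact invocation and options):
./docker-copyedit.py FROM A INTO A-without-volume REMOVE ALL VOLUMES
You can then use A-without-volume as the base image for B and seed /data/test in a normal RUN step.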
There are a few non-obvious ways to do this, and all of them have their obvious flaws.
Hijack the parent Dockerfile
Perhaps the simplest, but least reusable, way is to simply take the parent Dockerfile and modify that. A quick Google of docker <image-name:version> source should find the GitHub repository hosting the parent image's Dockerfile. This is good for optimizing the final image, but it destroys the point of using layers.
Use an on start script
While a Dockerfile can't make further modifications to a volume, a running container can. Add a script to the image, and change the entrypoint to call that script (and have that script call the original entrypoint). This is what you will HAVE to do if you are using a singleton-type container and need to partially 'reset' a volume on start-up. Of course, since volumes are persisted outside the container, just remember that 1) your changes may already have been made, and 2) another container started at the same time may already be making those changes.
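A minimal sketch of such a wrapper, using the example image above (the script name seed-entrypoint.sh and the marker file are made up for illustration):
#!/bin/sh
# seed-entrypoint.sh: seed the volume only the first time, then hand off
if [ ! -f /data/.seeded ]; then
    echo "BAR" > /data/test
    touch /data/.seeded
fi
exec "$@"
And the Dockerfile for B wires it in:
FROM A
COPY seed-entrypoint.sh /usr/local/bin/seed-entrypoint.sh
RUN chmod +x /usr/local/bin/seed-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/seed-entrypoint.sh"]
CMD ["bash"]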
Since volumes are (virtually) forever, I just use one time setup scripts after starting the containers for the first time. That way I easily control when the default data is setup/reset. (You can use docker inspect <volume-name> to get the host location of the volume if you need to)
The common middle ground on this one seems to be to have a one-off image whose only purpose is to run once to do the volume configurations, and then clean it up.
Bind to a new volume
Copy the contents of the old volume to a new one, and configure everything to use the new volume instead.
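A minimal sketch of that (the volume names old-data and new-data are made up for illustration):
docker volume create new-data
docker run --rm -v old-data:/from -v new-data:/to alpine sh -c "cp -a /from/. /to/"
Then start your container with -v new-data:/data instead of the old volume.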
And finally... reconsider if Docker is right for you
You probably already wasted more time on this than it was worth. (In my experience, the maintenance pain of Docker has always far outweighed the benefits. However, Docker is a tool, and with any tool, you need to sometimes take a moment to reflect if you are using it right, and if there are better tools for the job.)
I am running Docker Toolbox v1.13.1a on Windows 7 Pro Service Pack 1 x64,
with VirtualBox version 5.1.14 r112924.
When I try to run any Docker image, e.g. the official postgres image from Docker Hub, with volumes disabled, it works fine!
But when I enable the volumes, it fails.
I have tried all the official documentation.
The VM has the shared folder set up as required and also has full access to it.
shared folder screenshot
In the case of my postgresql example, it crashes with the following log:
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... LOG: could not link file "pg_xlog/xlogtemp.27" to "pg_xlog/000000010000000000000001": Operation not permitted
FATAL: could not open file "pg_xlog/000000010000000000000001": No such file or directory
child process exited with exit code 1
initdb: removing contents of data directory "/var/lib/postgresql/data"
I know it's a problem with folder permissions, but I'm kinda stuck!
A ton of thanks in advance
I've been busy with this problem all day, and my conclusion is that it's currently simply not possible to run postgresql inside a Docker container while keeping your data persistent in a separate volume.
I even tried running the container without linking to a volume, copying the data that was originally in /var/lib/postgresql into a folder on my host OS (Windows 10 Home), and then copying that into the folder that then got linked to the container itself.
Alas, I got the following error:
FATAL: data directory "/var/lib/postgresql/data/pgadmin" has wrong ownership
HINT: The server must be started by the user that owns the data directory.
In conclusion: something is going wrong with the ownership and which user owns the data directory, and to be able to fix it you'd need a Unix command line on Windows that can talk to Docker (something currently not possible with Bash on Ubuntu on Windows, which runs on Ubuntu 16.04 binaries).
Maybe, in the future, you'll be able to run the needed commands (found here, under Arbitrary --user Notes), but these are *nix commands and PowerShell (started by Kitematic) can't run them. Bash on Ubuntu on Windows could run them, but that shell has no connection to the Docker daemon/service on Windows...
TL;DR: Lost a day of work: It is currently impossible on Windows.
I have been trying to fix this issue as well.
At first I thought it was a symlink problem (because the first error fails on "could not link ... Operation not permitted").
To make sure symlinks are permitted you have to:
share a folder in virtualbox
run VirtualBox as administrator (if your account is in the administrator group): right-click virtualbox.exe and select "Run as Administrator"
if your account is not an administrator, grant the symlink privilege with secpol.msc > Local Policies > User Rights Assignment: add your user to "Create symbolic links"
enable symlinks for your shared folder in VirtualBox:
VBoxManage setextradata VM_NAME VBoxInternal2/SharedFoldersEnableSymlinksCreate/SHARED_FOLDER_NAME 1
Alternatively you can also use the C:\Users\username folder, which is shared and symlink-enabled by the default Docker Toolbox installation.
Now I can create symlinks in the shared folder from the Docker container... but I still have the same error "could not link ... operation not permitted".
So the reason must be somewhere else... in the file permissions, as you said, but I do not see why.
So, I have been doing something I knew would end up badly: I used the docker rm -f flag to remove an image B because it kept complaining that another image A was still using it, and since I couldn't find or see that specific image A, I thought, I'll use '-f'.
Unfortunately, of course it did exist, and now I cannot remove image A anymore, because I removed its dependency, image B, and I keep getting
Error response from daemon: No such id <id-of-already-removed-image-with--f-option>
error: failed to remove one or more images
when I try to remove it.
So basically, can I remove this image that points to an image that's not present anymore?
I switched off Docker with
service docker.io stop
changed the DOCKER_OPTS in /etc/default/docker.io to
DOCKER_OPTS="--graph=/home/kasper/dockerrepo"
restarted Docker with
service docker.io start
and then removed /var/lib/docker. I didn't figure out a finer-grained way of cleaning this up.