How do I move and rename files in Docker Swarm?

I am logged in as root.
The file config.json is in the ~/.docker directory and I just want to move it to the root folder.

I don't think I fully understand the problem, but I will try to answer what I think you might be looking for.
Whenever you want to change any settings, files, or other information in the Docker and Docker Swarm files/folders, you should stop the Docker service, make the changes, and then start the service again.
For instance adjusting the /etc/docker/daemon.json:
sudo service docker stop
sudo vi /etc/docker/daemon.json
# make the changes
sudo service docker start
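For reference, a minimal daemon.json might look like this; the keys shown are just common illustrative examples, not something from the original question:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m"
  }
}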
If you want to move actual files or switch directories, I personally use rsync, which works great.
Example:
sudo service docker stop
sudo rsync -aP /var/lib/docker/ /opt/docker
sudo service docker start
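If the goal is for Docker to actually use the new location after the rsync, you also need to point the daemon there. A minimal sketch, assuming /opt/docker as the target from the example above, is to set data-root in /etc/docker/daemon.json before starting the service again:
{
  "data-root": "/opt/docker"
}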
Hope it helps

Related

Docker - mkdir read-only file system

After freshly installing Ubuntu 18 I am receiving the following error when trying to launch a Docker container that has a bind mount to an LVM (ext4) partition:
mkdir /storage: read-only file system
I have tried reinstalling the OS, reinstalling Docker and forcing the drive to mount as RW (everything that isn't docker can write to the drive).
The directory that is being bound is currently set to 777 permissions.
There seems to be almost no information available for this error.
I had the same issue; I removed the Docker snap and reinstalled following the official Docker steps.
Remove docker from snap
snap remove docker
Then remove the Docker directory and the old version:
rm -R /var/lib/docker
sudo apt-get remove docker docker-engine docker.io
install official docker: https://docs.docker.com/install/linux/docker-ce/ubuntu/
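For reference, at the time of writing the linked instructions boil down to roughly the following (package names may differ depending on your release):
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io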
I hope this helps!
Update 01/2021: while still pretty cool, Snaps don't always work. Specifically with the Docker Snap, it didn't work for Swarm mode, so I ditched it and installed Docker the recommended way.
Snaps are actually pretty cool, IMO, and I think it's more beneficial to run Docker within a Snap than to install it directly on the system. The fact that you're getting a read-only permissions error is a good thing: it means that a rogue container isn't able to wreak havoc on your base OS. That said, here's how to fix your issue.
The reason this is coming up is that Snaps expose the host OS as read-only, so that Docker can see the host's files but not modify them (hence the permission denied error). But there is a directory that the Docker Snap can write to: /var/snap/docker. Actually, a better directory that the snap can write to is /home. I created /home/docker for containers to have persistent storage from the host system.
In your case, you wanted /storage to be writeable by Docker containers. I had a very similar use-case, which led me to this SO post. I solved this by mounting my storage within the docker snap directory /home/docker; the easiest example simply being a directory on the same filesystem:
mkdir -p /home/docker/<container name>/data
In my case, I created a ZFS dataset at the location above instead of simply mkdir'ing a directory.
Then, the container I ran could write to that with something like:
docker run -ti -v /home/docker/<container name>/data:/data [...]
Now you have the best of both worlds: Docker running in a contained Snap environment and persistent storage. 🙌🏽
To solve this, create/run your container with --privileged.
Example:
docker run --privileged -i --name master --hostname k8s-master -d ubuntu:20.04

Docker tries to mkdir the folder that I mount

Why is Docker trying to create the folder that I'm mounting? If I cd to C:\Users\szx\Projects and run:
docker run --rm -it -v "${PWD}:/src" ubuntu /bin/bash
This command exits with the following error:
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: error while creating mount source path '/c/Users/szx/Projects': mkdir /c/Users/szx/Projects: file exists.
I'm using Docker Toolbox on Windows 10 Home.
For anyone running mac/osx and encountering this, I restarted docker desktop in order to resolve this issue.
Edit: It would appear this also fixes the issue on Windows 10
My trouble was a fuse-mounted volume (e.g. sshfs) that got mounted again into the container. It didn't help that the fuse mount had the same ownership as the user inside the container.
I assume the underlying problem is that the docker/root supervising process needs to get a hold of the fuse-mount as well when setting up the container.
Eventually it helped to mount the fuse volume with the allow_other option. Be aware that this opens access to any user. allow_root might be better, but I haven't tested it, as it was blocked for other reasons.
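As a sketch (the user, host, and paths are placeholders, not from my actual setup), mounting an sshfs volume with allow_other looks like:
sshfs -o allow_other user@remote-host:/remote/dir /local/mountpoint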
I got this error after changing my Windows password. I had to go into Docker settings and do "Reset credentials" under "Shared Drives", then restart Docker.
Make sure the folder is being shared with the docker embedded VM. This differs with the various types of docker for desktop installs. With toolbox, I believe you can find the shared folders in the VirtualBox configuration. You should also note that these directories are case sensitive. One way to debug is to try:
docker run --rm -it -v "/:/host" ubuntu /bin/bash
And see what the filesystem looks like under "/host".
I encountered this problem on Docker for Windows after upgrading to 2.2.0.0 (42247). The issue was with the casing of the folder name that I provided in my arguments to the docker command.
Did you use this container before? You could try removing all dangling Docker volumes before re-executing your command:
docker volume rm $(docker volume ls -qf dangling=true)
I tried your command locally (MacOS) without any error.
I met this problem too.
I used to run the following command to share the folder with the container:
docker run ... -v c:/seleniumplus:/dev/seleniumplus ...
But it doesn't work anymore.
I am using Windows 10 as the host.
My docker has recently been upgraded to "19.03.5 build 633a0e".
I did change my windows password recently.
I followed the instructions to re-share the "C" drive, and restarted the docker and even restarted the computer, but it didn't work :-(.
All of a sudden, I found that the folder is "C:\SeleniumPlus" in the file explorer, so I ran
docker run ... -v C:/SeleniumPlus:/dev/seleniumplus ...
And it did work. So the Windows shared folder path is case-sensitive in the latest Docker ("19.03.5 build 633a0e").
I am working in Linux (WSL2 under Windows, to be more precise) and my problem was that there existed a symlink for that folder on my host:
# docker run --rm -it -v /etc/localtime:/etc/localtime ...
docker: Error response from daemon: mkdir /etc/localtime: file exists.
# ls -al /etc/localtime
lrwxrwxrwx 1 root root 25 May 23 2019 /etc/localtime -> ../usr/share/zoneinfo/UTC
It worked for me to bind mount the source /usr/share/zoneinfo/UTC instead.
I had this issue when working with Docker in a CryFS-encrypted directory on Ubuntu 20.04 LTS. The same probably happens on other UNIX-like OSes.
The problem was that by default the CryFS-mounted virtual directory is not accessible by root, but Docker runs as root. The solution is to enable root access for FUSE-mounted volumes by editing /etc/fuse.conf: uncomment the user_allow_other setting in it. Then mount the encrypted directory with the command cryfs <secretdir> <opendir> -o allow_root (where <secretdir> and <opendir> are the encrypted directory and the FUSE mount point for the decrypted virtual directory, respectively).
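Concretely, that amounts to something like this (the directory names are placeholders):
# in /etc/fuse.conf, make sure this line is uncommented:
user_allow_other
# then mount the encrypted directory allowing root access:
cryfs /path/to/secretdir /path/to/opendir -o allow_root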
Credits to the author of this comment on GitHub for calling my attention to the -o allow_root option.
Had the exact error. In my case, I used c instead of C when changing into my directory.
I solved this by restarting docker and rebuilding the images.
I have put the user_allow_other in /etc/fuse.conf.
Then mounting as in the example below has solved the problem.
$ sshfs -o allow_other user@remote_server:/directory/ <mountpoint>
I had this issue in WSL, likely caused by leaving some containers alive too long. None of the advice here worked for me. Finally, based on this blog post, I managed to fix it with the following commands, which wipe all the volumes completely to start fresh.
docker-compose down
docker rm -f $(docker ps -a -q)
docker volume rm $(docker volume ls -q)
docker-compose up
Then, I restarted WSL (wsl --shutdown), restarted docker desktop, and tried my command again.
If you work with a separate Windows user with which you share the volume (usually C:), you need to make sure it has access to the folders you are working with, including their parents, up to your home directory.
Also make sure that EFS (Encrypting File System) is disabled for the shared folders.
See also my answer here.
I had the same issue when developing using Docker. After I moved the project folder locally, Docker could not mount files that were listed with relative paths, and tried to make directories instead.
Pruning docker volumes / images / containers did not solve the issue. A simple restart of docker-desktop did the job.
This error crept up for me because my docker-compose file was looking for the APPDATA path on my macOS machine. macOS doesn't have an APPDATA environment variable, so I just created a .env file with the contents:
APPDATA=~/Library/
And my problem was solved.
I faced this error when another running container was already using the folder being mounted in the docker run command. Check for that and, if the container isn't needed, stop it. The best solution is to use a named volume, created with the following command:
docker volume create
Then mount this volume in whichever containers need it, as shown below.
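A minimal sketch, using mydata as a placeholder volume name and /data as an illustrative container path:
docker volume create mydata
docker run -v mydata:/data ...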
For anyone having this issue on a Linux-based OS, try remounting the remote folders used by the Docker image. This helped me on Ubuntu:
sudo mount -a
I am running Docker Desktop (Docker Engine v20.10.5) on Windows 10 and faced a similar error. I removed the existing image from the Docker Desktop UI, deleted the folder in question (deleting it was an option for me because I was just doing some local testing), removed the existing container, restarted Docker, and it worked.
In my case my volume path (in a .env file for docker-compose) had a space in it
/Volumes/some\ thing/folder
which did work on Docker 3 but didn't after updating to Docker 4. So I had to set my env variable to:
"/Volumes/some thing/folder"
I had this problem when the directory on my host was inside a directory mounted with gocryptfs. By default even root can't see the directory mounted by gocryptfs, only the user who executed the gocryptfs command can. To fix this add user_allow_other to /etc/fuse.conf and use the -allow_other flag e.g. gocryptfs -allow_other encrypted mnt
In my specific instance, Windows couldn't tell me who owned my SSL certs (probably Docker). I took control of the SSL certs again under Properties, added read permission for docker-users and my user, and that seemed to fix the problem. After tearing my hair out for 3 days with just the Daemon: Access Denied error, I finally got a meaningful error, the "mkdir failed" (or whatever it was) on a mounted file (the SSL cert) mentioned in another answer above.

Move Docker /var/run/docker data to different directory

I followed the following tutorial to permanently move where Docker saves its data, previously inside /usr/bin: https://linuxconfig.org/how-to-move-docker-s-default-var-lib-docker-to-another-directory-on-ubuntu-debian-linux
However, upon restarting Docker and rebuilding all containers, there seems to be activity in /var/run/docker/containerd/, which I was trying to avoid. I was hoping to have all things Docker saved in a specific directory, not in /var/run, alongside my newly created Docker directory replacing /usr/bin/docker.
Note: df -h did in fact prove that I am out of space in the base directory where /usr/bin and /var/run exist. I am trying to move all Docker items to a subdirectory under /opt.
How do I move all things Docker to a different directory?
(Answer) Found in documentation: https://docs.docker.com/config/daemon/systemd/#runtime-directory-and-storage-driver
As described in the Docker documentation, to set the docker daemon directory to <folder>:
Create /etc/docker/daemon.json with the following contents:
{
"data-root": "<folder>",
"storage-driver": "overlay2"
}
Restart the docker daemon.
Note that this will not move existing docker data over to the target folder - you will need to handle that (or start from scratch).
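If you do want to carry the existing data over rather than starting from scratch, one approach (mirroring the rsync example earlier on this page, and assuming <folder> is the same target you set in daemon.json) is to copy it while the daemon is stopped:
sudo service docker stop
sudo rsync -aP /var/lib/docker/ <folder>/
sudo service docker start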

Making docker container write files that the host machine can delete

I have a docker-based build environment - in order to build my project, I run a docker container with the --volume parameter, so it can access my project directory and build it.
The problem is that the files created by the container cannot be deleted by the host machine. The only workaround I currently have is to start an interactive container with the directory mounted and delete it.
Bottom line question: Is it possible to make Docker write files to the mounted area with permissions such that the host can later delete them?
This has less to do with Docker and more to do with basic Unix file permissions. Your Docker containers are running as root, which means any files created by the container are owned by root on your host. You fix this the way you fix any other file permission problem, by either (a) ensuring that the files/directories are created with your user id, or (b) ensuring that permissions allow you to delete the files even if they're not owned by you, or (c) using elevated privileges (e.g., sudo rm ...) to delete the files.
Depending on what you're doing, option (a) may be easy. If you can run the container as a non-root user, e.g.:
docker run -u $UID -v $HOME/output:/some/container/path ...
...then everything will Just Work, because the files will be created with your userid.
If the container must run as root initially, you may be able to take care of root actions in your ENTRYPOINT or CMD script, and then switch to another uid to run the main application. To do this, you would need to pass your user id into the container (e.g., as an environment variable), and then later use something like runuser to switch to the new userid:
exec runuser -u $TARGET_UID /some/command
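As a sketch (TARGET_UID is just an illustrative variable name and the paths are placeholders), the id can be passed in from the host like this and then picked up by the entrypoint above:
docker run -e TARGET_UID=$(id -u) -v $HOME/output:/some/container/path ...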
If neither of the above is an option, then sudo rm -rf mydirectory should work just as well as spinning up an interactive container.
If you only need the build artifacts in order to put them into the Docker image in the next stage, then it is probably worth using the multi-stage build option, as sketched below.
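A minimal multi-stage sketch (image names, paths, and the build command are illustrative and assume a Go project purely for the example):
# Dockerfile
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

FROM debian:bookworm-slim
COPY --from=build /out/app /usr/local/bin/app
CMD ["app"]
Only the final stage ends up in the shipped image, so the build artifacts land there without the build toolchain or intermediate files ever touching the host filesystem.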

How do I move a docker container's image to a persistent disk?

We have noticed that our containers are taking up a lot of space; one of the reasons for this is the images.
We would like to move the images.
I know right now they are stored in
/var/lib/docker/graph/<id>/layer
Is there a way to move these to another location/persistent disk?
To move images to another drive or another server:
docker save image_name > image_name.tar
mv image_name.tar /somewhere/else/
Load it back into docker
docker load < image_name.tar
Reference.
Here's an easy way to move Docker's data:
sudo service docker stop
sudo mv /var/lib/docker /a/new/location
sudo ln -s /a/new/location /var/lib/docker # Create a symbolic link
sudo service docker start
No need to change DOCKER_OPTS or use -g /path.
You can always mount /var/lib/docker to a different disk. Otherwise, you can start the daemon with -g /path in order to tell docker to use a different directory for storage.
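As a sketch of the first option (the device name is illustrative), you could dedicate a disk to Docker's storage via /etc/fstab and mount it before starting the daemon:
# /etc/fstab entry (example device):
/dev/sdb1  /var/lib/docker  ext4  defaults  0  2
# then:
sudo service docker stop
sudo mount /var/lib/docker
sudo service docker start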
Using the answer by @creack, I did the following on my Ubuntu install to move my entire Docker images/containers folder to a new location/disk. The great thing about doing this is that any new images I install will then use the new disk location.
First stop the docker service:
sudo service docker stop
Then move the docker folder from the default location to your target location:
sudo mv /var/lib/docker /thenewlocation
Then edit the /etc/default/docker file, inserting/amending the following line which provides the new location as an argument for the docker service:
DOCKER_OPTS="-g /thenewlocation/docker"
Restart the docker service:
sudo service docker start
This worked 100% for me; all my images remained intact.
