I am trying to execute ubuntu in docker. I use this command docker run -it ubuntu, and I want to install some packages and store some files. I know about volumes, but I have used it only in docker-compose. Is it possible to store all the container's data or how can I do that properly?
When you run a container, Docker creates a namespace and loads the image filesystem into that namespace. Any changes you apply in a running container, including installing packages, only remain for the lifetime of the container: if you remove the container and rerun it, they're gone.
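A quick way to see this for yourself (the container name demo is just an illustration):

docker run --name demo -it ubuntu bash -c "apt-get update && apt-get install -y curl && which curl"
docker rm demo
docker run --rm -it ubuntu which curl
# the second run prints nothing: curl was installed in the removed container, not in the image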
If you want your changes to be permanent, you have to commit the running container, which actually creates a new image from it:

sudo docker commit [CONTAINER_ID] [new_image_name]

As David pointed out in the comments:

You should pretty much never run docker commit. It leads to images that can't be reproduced, and you'll be in trouble if there's a security fix you're required to take a year down the road.
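If you do want installed packages to survive, a more reproducible alternative to docker commit is a small Dockerfile. A minimal sketch (the package and the my-ubuntu tag are just examples):

cat > Dockerfile <<'EOF'
FROM ubuntu
RUN apt-get update && apt-get install -y curl
EOF
docker build -t my-ubuntu .
docker run -it my-ubuntu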
If you have an app inside the container, like MySQL, and want the data stored by that app to be permanent, you should map a volume from the host like this:
docker run -d -v /home/username/mysql-data:/var/lib/mysql --name mysql mysql
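For the plain ubuntu case in the question, the same idea keeps your files across container removals (the host path is just an example):

docker run -it -v /home/username/ubuntu-data:/data ubuntu
# anything written under /data inside the container ends up in /home/username/ubuntu-data on the host and survives docker rm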
I've been trying the whole day to accomplish a simplistic example of sharing a Windows directory to Linux container running on Windows Docker host.
Have read all the guidelines and run the following:
docker run -it --rm -p 5002:80 --name mount-test --mount type=bind,source=D:\DockerArea\PortScanner,target=/app/PortScannerWorkingDirectory barebonewebapi:latest
The origin PortScanner directory on host machine has got some text file in it. The container is created successfully.
The issue is that when I run
docker exec -it mount-test /bin/bash
and then list the mounted directory PortScannerWorkingDirectory inside the container, it shows as empty. Nor can the C# code read the contents of the host file in the mapped directory.
Am I missing something simple here? I feel like I got stuck and can't share files on the host Windows machine to Linux container.
After several days of dealing with the issue I've found a rather obvious answer. Although I already had the C and D drives shared with Docker in the Docker settings, I did an experiment and re-shared both drives (there's a special Reset Credentials button for that purpose in the Docker for Windows settings). After that the issue was resolved. So I'm saving it here in the hope that it may help someone else, since this seems to be a glitch with permissions or something similar.
The issue is quite hard to diagnose: when it occurs, the Docker container just silently writes into its writable layer and no error pops up.
Go to the docker settings -> shared drives -> reset credentials.
and then tick the drive and click the Apply button.
Then execute the following command, as suggested by Docker:
docker run --rm -v c:/Users:/data alpine ls /data
Why is Docker trying to create the folder that I'm mounting? If I cd to C:\Users\szx\Projects and run:
docker run --rm -it -v "${PWD}:/src" ubuntu /bin/bash
This command exits with the following error:
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: error while creating mount source path '/c/Users/szx/Projects': mkdir /c/Users/szx/Projects: file exists.
I'm using Docker Toolbox on Windows 10 Home.
For anyone running mac/osx and encountering this, I restarted docker desktop in order to resolve this issue.
Edit: It would appear this also fixes the issue on Windows 10
My trouble was a fuse-mounted volume (e.g. sshfs) that got mounted again into the container. It didn't help that the fuse mount had the same ownership as the user inside the container.
I assume the underlying problem is that the docker/root supervising process needs to get a hold of the fuse-mount as well when setting up the container.
Eventually it helped to mount the fuse volume with the allow_other option. Be aware that this opens access to any user. Better might be allow_root – not tested, as blocked for other reasons.
I got this error after changing my Windows password. I had to go into Docker settings and do "Reset credentials" under "Shared Drives", then restart Docker.
Make sure the folder is being shared with the Docker embedded VM. This differs between the various types of Docker desktop installs. With Toolbox, I believe you can find the shared folders in the VirtualBox configuration. You should also note that these directories are case sensitive. One way to debug is to try:
docker run --rm -it -v "/:/host" ubuntu /bin/bash
And see what the filesystem looks like under "/host".
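Once inside the container, something like this shows whether the share actually made it into the VM (the /c prefix is how the Toolbox boot2docker VM exposes Windows drives):

ls /host/c/Users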
I encountered this problem on Docker for Windows after upgrading to 2.2.0.0 (42247). The issue was with the casing of the folder name that I provided in my arguments to the docker command.
Did you use this container before? You could try to remove all the docker-volumes before re-executing your command.
docker volume rm $(docker volume ls -qf dangling=true)
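On recent Docker versions a similar built-in cleanup is available; check docker volume prune --help for what your version removes by default:

docker volume prune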
I tried your command locally (MacOS) without any error.
I met this problem too.
I used to run the following command to share the folder with the container:
docker run ... -v c:/seleniumplus:/dev/seleniumplus ...
But it doesn't work anymore.
I am using the Windows 10 as host.
My docker has recently been upgraded to "19.03.5 build 633a0e".
I did change my windows password recently.
I followed the instructions to re-share the "C" drive, restarted Docker and even restarted the computer, but it didn't work :-(.
All of a sudden, I noticed that the folder is "C:\SeleniumPlus" in the file explorer, so I ran
docker run ... -v C:/SeleniumPlus:/dev/seleniumplus ...
And it did work. So the Windows shared folder path is case-sensitive in the latest Docker ("19.03.5 build 633a0e").
I am working in Linux (WSL2 under Windows, to be more precise) and my problem was that there existed a symlink for that folder on my host:
# docker run --rm -it -v /etc/localtime:/etc/localtime ...
docker: Error response from daemon: mkdir /etc/localtime: file exists.
# ls -al /etc/localtime
lrwxrwxrwx 1 root root 25 May 23 2019 /etc/localtime -> ../usr/share/zoneinfo/UTC
It worked for me to bind mount the source /usr/share/zoneinfo/UTC instead.
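In my case that meant something along these lines (the image and command are just placeholders):

docker run --rm -it -v /usr/share/zoneinfo/UTC:/etc/localtime:ro ubuntu date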
I had this issue when I was working with Docker in a CryFS-encrypted directory on Ubuntu 20.04 LTS. The same probably happens in other UNIX-like OSes.
The problem was that by default the CryFS-mounted virtual directory is not accessible by root, but Docker runs as root. The solution is to enable root access for FUSE-mounted volumes by editing /etc/fuse.conf: just uncomment the user_allow_other setting in it. Then mount the encrypted directory with the command cryfs <secretdir> <opendir> -o allow_root (where <secretdir> and <opendir> are the encrypted directory and the FUSE mount point for the decrypted virtual directory, respectively).
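As a sketch, the two steps look like this (<secretdir> and <opendir> as above; the first line appends the setting only if it is missing):

grep -q '^user_allow_other' /etc/fuse.conf || echo user_allow_other | sudo tee -a /etc/fuse.conf
cryfs <secretdir> <opendir> -o allow_root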
Credits to the author of this comment on GitHub for calling my attention to the -o allow_root option.
Had the exact error. In my case, I used c instead of C when changing into my directory.
I solved this by restarting docker and rebuilding the images.
I put user_allow_other in /etc/fuse.conf.
Then mounting as in the example below solved the problem.
$ sshfs -o allow_other user@remote_server:/directory/ <local_mountpoint>
I had this issue in WSL, likely caused by leaving some containers alive too long. None of the advice here worked for me. Finally, based on this blog post, I managed to fix it with the following commands, which wipe all the volumes completely to start fresh.
docker-compose down
docker rm -f $(docker ps -a -q)
docker volume rm $(docker volume ls -q)
docker-compose up
Then, I restarted WSL (wsl --shutdown), restarted docker desktop, and tried my command again.
In case you work with a separate Windows user that you share the volume with (usually C:): you need to make sure it has access to the folders you are working with -- including their parents, up to your home directory.
Also make sure that EFS (Encrypting File System) is disabled for the shared folders.
See also my answer here.
I had the same issue when developing with Docker. After I moved the project folder locally, Docker could not mount files that were listed with relative paths, and tried to make directories instead.
Pruning docker volumes / images / containers did not solve the issue. A simple restart of docker-desktop did the job.
This error crept up for me because my docker-compose file was looking for the APPDATA path on my machine on macOS. macOS doesn't have an APPDATA environment variable, so I just created a .env file with the contents:
APPDATA=~/Library/
And my problem was solved.
I faced this error when another running container was already using the folder being mounted in the docker run command. Please check for that and stop the container if it is not needed. The best solution is to use a volume, created with the following command:
docker volume create
Then mount the created volume wherever it's needed; it can be shared by multiple containers.
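A concrete sketch (the volume, container, and image names are just placeholders):

docker volume create shared-data
docker run -d --name app1 -v shared-data:/data nginx
docker run -d --name app2 -v shared-data:/data nginx
# both containers see the same files under /data, and the data outlives them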
For anyone having this issue on a Linux-based OS, try to remount the remote folders used by the Docker image. This helped me on Ubuntu:
sudo mount -a
I am running Docker Desktop (Docker Engine v20.10.5) on Windows 10 and faced a similar error. I went ahead and removed the existing image from the Docker Desktop UI, deleted the folder in question (for me deleting the folder was an option because I was just doing some local testing), removed the existing container, restarted Docker, and it worked.
In my case my volume path (in a .env file for docker-compose) had a space in it
/Volumes/some\ thing/folder
which did work on Docker 3 but didn't after updating to Docker 4, so I had to set my env variable to:
"/Volumes/some thing/folder"
I had this problem when the directory on my host was inside a directory mounted with gocryptfs. By default even root can't see the directory mounted by gocryptfs; only the user who executed the gocryptfs command can. To fix this, add user_allow_other to /etc/fuse.conf and use the -allow_other flag, e.g. gocryptfs -allow_other encrypted mnt
In my specific instance, Windows couldn't tell me who owned my SSL certs (probably Docker). I took ownership of the SSL certs again under Properties, added read permission for docker-users and my user, and that seemed to fix the problem. After tearing my hair out for 3 days with just the Daemon: Access Denied error, I finally got a meaningful error related to another answer above, "mkdir failed" or whatever, on a mounted file (the SSL cert).
Disclaimer/Edit 2
Some years later, for everyone reading this question: if you are on Windows and want to use Docker with Linux containers, I highly recommend not using Docker for Windows at all and instead starting the entire Docker environment inside a VM altogether. This ext3/NTFS issue will break your neck on so many different levels that installing docker-machine might not even be worth the effort.
Edit:
I am using docker-machine which starts a boot2docker instance inside a Virtualbox VM with a shared folder on /c/Users from which you can mount volumes into your containers. The permissions of said volumes are the ones the question is about. The VMs are stored under /c/Users/tom/.docker/
I chose the docker-machine VirtualBox workflow over Hyper-V because I need VBox in my daily workflow, and running Hyper-V and VirtualBox together on one system is not possible due to incompatibilities between the hypervisors.
Original question
I am currently trying to set up PHPMyAdmin in a container on Windows, but I can't change the permissions of the config.inc.php file.
I found: Cannot call chown inside Docker container (Docker for Windows) and thought this might be somewhat related but it appears to apply only to MongoDB.
This is my docker-compose.yml
version: "3"
services:
  pma:
    image: (secret company registry)/phpmyadmin
    ports:
      - 9090:80
    volumes:
      - /c/Users/tom/projects/myproject/data/var/www/public/config.inc.php:/var/www/public/config.inc.php
Now, when I docker exec -it [container] bash and change into the mounted directory, I try to run chmod on config.inc.php, but for some reason it fails silently.
root@22a4bag43245: ls -la config.inc.php
-rw------- 1 root root 0 Aug 11 15:11 config.inc.php
root@22a4bag43245: chmod 655 config.inc.php
root@22a4bag43245: ls -la config.inc.php
-rw------- 1 root root 0 Aug 11 15:11 config.inc.php
Considering the linked answer, I thought I could just move the volume out of my user home, but then VBox doesn't mount the folder at all.
How do I change the file permissions of /var/www/public/config.inc.php persistently?
I had the same problem of not being able to change ownership even after using chown. As I researched, it was because of NTFS volumes being mounted inside an ext filesystem. So I used another approach.
Volumes internal to Docker are free from these problems, so you can mount your file on an internal Docker volume and then create a hard link to that file inside your local folder wherever you want:
sudo ln $(docker volume inspect --format '{{ .Mountpoint }}' <project_name>_<volume_name>) <absolute_path_of_destination>
This way you can have your files in the desired place, inside Docker and without any permission issues, and you will be able to modify the contents of the file as with a normal volume mount, thanks to the hard link.
Here is a working implementation of this process which mounts and links a directory. In case you want to know the details, see the possible fix section in the issue.
EDIT
Steps to implement this approach:
Mount the concerned file in an internal docker volume (also known as a named volume).
Before making the hard link, make sure the volume and the concerned file are actually present. To ensure this, you should have run your container at least once before; or, if you want to automate the file creation, you can include a docker run which creates the required file and exits.
docker run --rm -itd \
-v "<Project_name>_<volume_name>:/absolute/path" \
<image> bash -c "touch /absolute/path/<my_file>"
This docker run will create the volume and the required file. Here, <project_name> is, by default, the name of the folder in which the project is present, and <volume_name> is the same as the one we want to use in our original container. <image> can be the same image that is already being used in your original containers.
Create a hard link in your OS to the actual file location on your system. You can find the file location by running docker volume inspect --format '{{ .Mountpoint }}' <project_name>_<volume_name> and appending /<my_file> to the output. Linux users can use ln in the terminal and Windows users can use mklink in the command prompt (see the sketch after this list).
In step 3 we have not used /absolute/path since the <volume_name> refers to that location already, and we just need to refer to the file.
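Putting the pieces together, on Linux the link step might look like this (project, volume, and file names are placeholders; hard links require the volume and the destination to be on the same filesystem):

sudo ln "$(docker volume inspect --format '{{ .Mountpoint }}' myproject_config)/config.inc.php" ./config.inc.php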
Try one of the following:
If you can rebuild the image (secret company registry)/docker-stretchimal-apache2-php7-pma, then inside the Dockerfile add the following:
USER root
RUN chmod 655 config.inc.php
Then you can rebuild the image and push it to the registry, and what you were doing should work. This should be your preferred solution, as you don't want to be manually changing the permissions every time you start a new container.
Try to exec using the user root explicitly
docker exec -it -u root [container] bash
We have noticed that our containers are taking up a lot of space; one of the reasons for this is the images.
We would like to move the images.
I know right now they are stored in
/var/lib/docker/graph/<id>/layer
Is there a way to move these to another location/persistent disk?
To move images to another drive or another server:
docker save image_name > image_name.tar
mv image_name.tar /somewhere/else/
Load it back into docker
docker load < image_name.tar
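To move an image straight to another server, the save/load pair can also be piped over ssh (the host name is a placeholder):

docker save image_name | ssh user@otherhost docker load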
Reference.
Here's an easy way to move Docker's data:
sudo service docker stop
sudo mv /var/lib/docker /a/new/location
sudo ln -s /a/new/location /var/lib/docker # Create a symbolic link
sudo service docker start
No need to change DOCKER_OPTS or use -g /path.
You can always mount /var/lib/docker to a different disk. Otherwise, you can start the daemon with -g /path in order to tell docker to use a different directory for storage.
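On current Docker releases the storage location is usually configured in /etc/docker/daemon.json rather than with -g. A sketch for systemd systems (the path is an example, and this overwrites any existing daemon.json):

sudo systemctl stop docker
echo '{ "data-root": "/a/new/location/docker" }' | sudo tee /etc/docker/daemon.json
sudo systemctl start docker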
Using the answer by @creack I did the following on my Ubuntu install to move my entire docker images/containers folder to a new location/disk. The great thing about doing this is that any new images that I install will then use the new disk location.
First stop the docker service:
sudo service docker stop
Then move the docker folder from the default location to your target location:
sudo mv /var/lib/docker /thenewlocation
Then edit the /etc/default/docker file, inserting/amending the following line which provides the new location as an argument for the docker service:
DOCKER_OPTS="-g /thenewlocation/docker"
Restart the docker service:
sudo service docker start
This worked 100% for me - all my images remained intact.