I'm developing a Symfony application with Docker.
I'm sharing a host volume that is supposed to contain my project, which should be served by Apache:
docker run -d -ti --name web -p 80:80 -v /Users/Matteo/Documents/em3:/var/www/html/applications ubuntu /bin/bash
As a base image I've used Ubuntu, on which I've installed Apache and PHP 7. Everything works, but when I enter my container:
docker exec -it web /bin/bash
root@85a23559d01b:/var/www/html/applications/auth# app/console cache:clear --env=dev
[Symfony\Component\Filesystem\Exception\IOException]
Failed to remove directory "/var/www/html/applications/auth/app/cache/de~/jms_serializer": .
Maybe this is because of the directory permissions?
root@85a23559d01b:/var/www/html/applications/auth/app# ls -al | grep cache
drwxr-xr-x 1 1000 staff 374 Oct 30 21:50 cache
chmod doesn't change anything, though:
root@85a23559d01b:/var/www/html/applications/auth/app# chmod g+w cache
root@85a23559d01b:/var/www/html/applications/auth/app# ls -al | grep cache
drwxr-xr-x 1 1000 staff 374 Oct 30 21:50 cache
I guess I'm missing something. Any help would be appreciated
As commented in Symfony issue 2600,
you can "easily" reproduce this if you use a Linux VirtualBox VM on a Windows host.
[And that might be the case here, using boot2docker from Docker Toolbox, instead of Docker for Windows and its Hyper-V]
cache:clear is never able to remove app/cache/dev_old - but that may be an issue with the shared-folder system provided by VirtualBox (there are reports of similar issues on their forums).
You have to upgrade the VirtualBox Guest Additions.
The OP Bertuz points out in the comments to "Changing boot2docker to use NFS for local mounts in OS X" and its file-nfs-mount-boot2docker-sh gist (and a more recent one).
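If the VirtualBox shared folder itself is the culprit, remounting it with an explicit uid/gid sometimes helps, since vboxsf ignores chmod (which is why the chmod above had no effect). A sketch, assuming the default boot2docker share name Users and Apache running as www-data (uid/gid 33 on Debian/Ubuntu images; both are assumptions):

```shell
# Run inside the boot2docker VM (docker-machine ssh), not inside the container.
# Assumptions: the VirtualBox share is named "Users" (the boot2docker default)
# and Apache in the container runs as www-data, uid/gid 33.
sudo umount /Users
sudo mount -t vboxsf -o uid=33,gid=33 Users /Users
```

After a remount like this, files under the share appear owned by uid 33 inside the container, so cache:clear can delete them.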
Related
I am mounting an executable file within my container using docker-compose:
volumes:
- /usr/bin/docker:/usr/bin/docker
When I connect to my container, I can clearly see that the file has been mounted correctly.
However, when I try to execute it, I get a weird error:
/app # ls -l /usr/bin/ | grep docker
-rwxr-xr-x 1 root root 60586560 Mar 7 15:57 docker
/app # /usr/bin/docker ps
sh: /usr/bin/docker: not found
If you have any clue about this issue, please let me know.
Best regards.
The solution was given to me in the comments above. The problem was that my Docker image was not based on Ubuntu like my server, causing dependency issues.
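As a side note, this kind of "not found" for a file that plainly exists usually comes from the ELF interpreter, not the shell: a glibc-linked binary cannot start on a musl-only image such as Alpine, because its loader is missing. A quick way to inspect a binary (using /bin/ls here as a stand-in for the mounted /usr/bin/docker):

```shell
# The first bytes of an executable are the ELF magic; the header also names
# the interpreter (e.g. /lib64/ld-linux-x86-64.so.2). If that interpreter is
# absent in the image, the shell reports "not found".
head -c 4 /bin/ls | od -An -c
# ldd prints the interpreter and the shared libraries the binary needs:
ldd /bin/ls 2>/dev/null || echo "ldd not available here"
```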
So far I have not had this problem. I am running a container image on a remote workstation. Unlike a normal setup, this workstation is not connected to the internet, and I had to start the Docker daemon manually (for reference).
After this, to run the container I tried:
docker run -it --rm --gpus all --env CUDA_VISIBLE_DEVICES -v "/mnt/disco/projects/ThisProject/:/home/ThisProject/" -w "/home/ThisProject/" container_image:latest /bin/bash
When I do this I get into the container in the folder /home/ThisProject as root, but I cannot ls there. If I cd .. and run ls -l, I can see that the ThisProject folder looks like this:
drwxrws--- 7 nobody nogroup 4096 Jul 21 07:30 ThisProject
As you can see, the owner is "nobody".
What can I do to correct this?
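nobody:nogroup on an NFS-backed share typically means the server is squashing unknown ids (root_squash / all_squash). A sketch of one workaround, running the container with the host user's uid/gid instead of root (paths and image name taken from the question; whether it works depends on how the share is exported):

```shell
# Run as the invoking host user so the NFS server sees ids it recognizes.
HOST_UID=$(id -u)
HOST_GID=$(id -g)
docker run -it --rm --gpus all --env CUDA_VISIBLE_DEVICES \
  --user "${HOST_UID}:${HOST_GID}" \
  -v "/mnt/disco/projects/ThisProject/:/home/ThisProject/" \
  -w "/home/ThisProject/" container_image:latest /bin/bash
```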
I want to use a custom docker config.json file like this to reassign the detach keystrokes:
{
"detachKeys": "ctrl-q,ctrl-q"
}
In a "normal" Docker world, i.e. one where Docker is installed via apt or similar rather than snap, I could put this file in $HOME/.docker/config.json and the setting would be picked up the next time I run the docker command. However, this file is not recognized when running /snap/bin/docker; docker just silently ignores it.
If I try to force it to use this directory, I am denied:
$ docker --config .docker/ run -it ubuntu /bin/bash
WARNING: Error loading config file: .docker/config.json: open .docker/config.json: permission denied
If I try to place the file alongside daemon.json in /var/snap/docker/current/config/, docker also silently fails to notice config.json:
$ ls -l /var/snap/docker/current/config/
total 8
-rw-r--r-- 1 root root 36 Feb 28 11:28 config.json
-rw-r--r-- 1 root root 200 Feb 28 09:44 daemon.json
$ docker run -it ubuntu /bin/bash
Now, I can force the directory location, but surely there is a better way?
$ docker --config /var/snap/docker/current/config/ run -it ubuntu /bin/bash
Ok, after writing this question, I ran across the answer. Snap wants this file to go here:
$ ls -l ~/snap/docker/current/.docker/
total 4
-rw-r--r-- 1 gclaybur gclaybur 36 Feb 28 12:04 config.json
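For completeness, the docker CLI also honors the DOCKER_CONFIG environment variable, which names the directory containing config.json; pointing it at a directory the snap can read (e.g. under ~/snap/docker/) should have the same effect as the location above. A sketch:

```shell
# Create the config in the snap-visible directory and point the CLI at it.
CFG="$HOME/snap/docker/current/.docker"
mkdir -p "$CFG"
cat > "$CFG/config.json" <<'EOF'
{
  "detachKeys": "ctrl-q,ctrl-q"
}
EOF
export DOCKER_CONFIG="$CFG"
```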
This question already has answers here:
I lose my data when the container exits
(11 answers)
Closed 3 years ago.
I pulled the Ubuntu image using docker pull.
I connect to the container using docker exec, create a file, and then exit.
When I run docker exec again, the file is lost.
How can I maintain files in that container? I have tried a Dockerfile and tagging Docker images, and that works.
But is there any other way to keep files in a Docker container for longer?
One option is to commit your changes. After you've added the file, and while the container is still running, you should run:
docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
Another option: you may want to use a volume, but that depends on your logic and needs.
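For example (the container and image names here are hypothetical):

```shell
# Freeze the running container's filesystem, including the new file,
# into a new image tag:
docker commit mycontainer myimage:with-file
# Containers started from that image will contain the file:
docker run -it myimage:with-file /bin/bash
```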
The best way to persist content in containers is with Docker volumes:
╭─exadra37@exadra37-Vostro-470 ~/Developer/DevNull/stackoverflow
╰─➤ sudo docker run --rm -it -v $PWD:/data ubuntu
root@00af7ccf1d3b:/# echo "Persists data with Docker Volumes" > /data/docker-volumes.txt
root@00af7ccf1d3b:/# cat /data/docker-volumes.txt
Persists data with Docker Volumes
root@00af7ccf1d3b:/# exit
╭─exadra37@exadra37-Vostro-470 ~/Developer/DevNull/stackoverflow
╰─➤ ls -al
total 12
drwxr-xr-x 2 exadra37 exadra37 4096 Nov 25 15:34 .
drwxr-xr-x 8 exadra37 exadra37 4096 Nov 25 15:33 ..
-rw-r--r-- 1 root root 34 Nov 25 15:34 docker-volumes.txt
╭─exadra37@exadra37-Vostro-470 ~/Developer/DevNull/stackoverflow
╰─➤ cat docker-volumes.txt
Persists data with Docker Volumes
The docker command explained:
sudo docker run --rm -it -v $PWD:/data ubuntu
I used the flag -v to map the current directory $PWD to the /data directory inside the container.
Inside the container:
I wrote some content to it.
I read that same content back.
I exited the container.
On the host:
I used ls -al to confirm that the file was persisted to my computer.
I confirmed that I could access that same file in my computer's filesystem.
I have my code ready on Windows, but I find it's not easy to share it with boot2docker.
I also find that boot2docker can't persist my changes. For example, if I create a folder /temp and then restart boot2docker, the folder disappears, which is very inconvenient.
What is your workflow when you have code on Windows but need to dockerize it?
---update---
I tried to update the setting in VirtualBox and restart boot2docker, but it's not working on my machine.
docker@boot2docker:/$ ls -al /c
total 4
drwxr-xr-x 3 root root 60 Jun 17 05:42 ./
drwxrwxr-x 17 root root 400 Jun 17 05:42 ../
dr-xr-xr-x 1 docker staff 4096 Jun 16 09:47 Users/
Boot2Docker is a small Linux VM running on VirtualBox. So before you can use your files (from Windows) in Docker (which is running in this VM), you must first share your code with the Boot2Docker VM itself.
To do so, you mount your Windows folder into the VM while the VM is shut down (here a VM name of default is assumed):
"C:/Program Files/Oracle/VirtualBox/VBoxManage" sharedfolder \
add default -name win_share -hostpath c:/work
(Alternatively you can also open the VirtualBox UI and mount the folder to your VM just as you did in your screenshot!)
Now ssh into the Boot2Docker VM from the Docker Quickstart Terminal:
docker-machine ssh default
Then perform the mount:
Make a folder inside the VM: sudo mkdir /VM_share
Mount the Windows folder to it: sudo mount -t vboxsf win_share /VM_share
After that, you can access C:/work inside your Boot2Docker VM:
cd /VM_share
Now that your code is present inside your VM, you can use it with Docker, either by mounting it as a volume to the container:
docker-machine ssh default
docker run --volume /VM_share:/folder/in/container some/image
Or by using it while building your Docker image:
...
ADD /my_windows_folder /folder
...
See this answer.
I have Windows 10 Home edition with Docker toolbox 1.12.2 and VirtualBox 5.1.6.
I was able to mount a folder under C:\Users successfully in my container without doing any extra steps such as docker-machine ssh default.
Example:
docker run -it --rm -v /c/Users/antonyj/Documents/code:/mnt ubuntu /bin/bash
So having your files under C:\Users probably is the simplest thing to do.
If you do not want to have your files under C:\Users, then you have to follow the steps in the accepted answer.
Using Docker Toolbox, the shared directory can only be under /c/Users:
Invalid directory. Volume directories must be under your Users directory
The commands for Step 1 and Step 2 can be run in the "Docker Quickstart Terminal":
# Step 1. Add the shared folder on the command line; if this errors out, add it manually in the VirtualBox UI instead, as shown above.
"C:/Program Files/Oracle/VirtualBox/VBoxManage.exe" sharedfolder add default --name "E_DRIVE" --hostpath "e:\\" --automount
# Try 1. Only a temporary effect: the mount is lost after the VM restarts.
#docker-machine ssh default "sudo mkdir -p /e" # Create a mount point consistent with the Windows drive letter
#docker-machine ssh default "sudo mount -t vboxsf -o uid=1000,gid=50 E_DRIVE /e"
# Try 2. Modify /etc/fstab. This is not a permanent mount either: /etc/fstab is reset on every restart.
#docker-machine ssh default "sudo sed -i '$ a\E_DRIVE /e vboxsf uid=1000,gid=50 0 0' /etc/fstab"
# Step 2. Persist the mount via /var/lib/boot2docker/bootlocal.sh (see `C:\Program Files\Docker Toolbox\start.sh` and https://github.com/docker/machine/issues/1814#issuecomment-239957893)
docker-machine ssh default "cat <<EOF | sudo tee /var/lib/boot2docker/bootlocal.sh && sudo chmod u+x /var/lib/boot2docker/bootlocal.sh
#!/bin/sh
mkdir -p /e
mount -t vboxsf -o uid=1000,gid=50 E_DRIVE /e
EOF
"
Then restart the VM. Try this: docker run --name php-fpm --rm -it -v /e:/var/www/html php:7.1.4-fpm /bin/bash
References:
What's the best way to share files from Windows to Boot2docker VM?
http://hessian.cn/p/1502.html
Windows + Boot2Docker, How to add D:\ drive to be accessible from within docker?
In the system tray, you should have the cute Docker whale swimming. Right-click it and select Settings.
Click Apply. This will bring up the credentials dialog, and you will need to provide your current Windows credentials. Make sure you enter them correctly. I also suspect that you might need to be an administrator.
To mount our host directory (C:\data) in a container, we use the -v (volume) flag when running the container. I have CentOS in my local Docker container. A sample run is shown here:
docker run -v c:/data:/data centos ls /data
Mount a shared folder from the Windows host into the Linux guest VM (VM name 'default'):
Shutdown 'default' VM:
cd "C:\Program Files\Oracle\VirtualBox"
VBoxManage controlvm default poweroff
Add shared folder command line:
./VBoxManage sharedfolder add default -name win_share -hostpath "C:\docker\volumes"
Start the VM (headless, command-line interface only):
./VBoxManage startvm default --type headless
Connect to ssh:
docker-machine ssh default
Create VM sharedfolder directory:
sudo mkdir /sharedcontent
Mount Windows folder to host VM:
sudo mount -t vboxsf win_share /sharedcontent
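With the share mounted inside the VM, a container can then bind-mount it as a volume (the container-side path /data here is just an example):

```shell
# /sharedcontent (backed by C:\docker\volumes on the Windows host)
# becomes visible inside the container:
docker run --rm -it -v /sharedcontent:/data ubuntu ls /data
```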