I am mounting an executable file within my container using docker-compose:
volumes:
- /usr/bin/docker:/usr/bin/docker
When I connect to my container, I can clearly see that the file has been correctly mounted.
However, when I try to execute it, I hit a weird issue:
/app # ls -l /usr/bin/ | grep docker
-rwxr-xr-x 1 root root 60586560 Mar 7 15:57 docker
/app # /usr/bin/docker ps
sh: /usr/bin/docker: not found
If you have any clue about this issue, please let me know.
Best regards.
The solution was given to me in the comments above. The problem was that my Docker image was not based on Ubuntu like my server, so the dynamically linked docker binary could not find the shared libraries it was built against inside the container. Confusingly, the shell reports a missing dynamic loader or library as "not found", even though the binary itself is clearly there.
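If you hit the same misleading message, a quick way to diagnose it is to list the binary's dynamic dependencies from inside the container (assuming ldd is available in the image, which it usually is even on Alpine):
/app # ldd /usr/bin/docker
Any library (or the loader itself) that the binary expects but the image does not ship will show up as unresolved. A statically linked docker CLI, or an image based on the same distribution as the host, avoids the mismatch.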
I pulled the Ubuntu image using docker pull.
I connect to the container using docker exec, create a file, and then exit.
When I connect again with docker exec, the file is lost.
How can I maintain the file in that container? I have tried using a Dockerfile and tagging images, and that works.
But is there any other way to maintain files in a docker container for a longer time?
One option is to commit your changes. After you've added the file, and while the container is still running, you should run:
docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
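For example (the container name mycontainer and the image name myimage here are hypothetical):
docker commit mycontainer myimage:with-file
docker run -it myimage:with-file /bin/bash
The committed image captures the container's filesystem changes, so the file is present in any container you start from it.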
Another option is to use a volume, but that depends on your logic and needs.
The best way to persist content in containers is with Docker volumes:
╭─exadra37@exadra37-Vostro-470 ~/Developer/DevNull/stackoverflow
╰─➤ sudo docker run --rm -it -v $PWD:/data ubuntu
root@00af7ccf1d3b:/# echo "Persits data with Docker Volumes" > /data/docker-volumes.txt
root@00af7ccf1d3b:/# cat /data/docker-volumes.txt
Persits data with Docker Volumes
root@00af7ccf1d3b:/# exit
╭─exadra37@exadra37-Vostro-470 ~/Developer/DevNull/stackoverflow
╰─➤ ls -al
total 12
drwxr-xr-x 2 exadra37 exadra37 4096 Nov 25 15:34 .
drwxr-xr-x 8 exadra37 exadra37 4096 Nov 25 15:33 ..
-rw-r--r-- 1 root root 33 Nov 25 15:34 docker-volumes.txt
╭─exadra37@exadra37-Vostro-470 ~/Developer/DevNull/stackoverflow
╰─➤ cat docker-volumes.txt
Persits data with Docker Volumes
The docker command explained:
sudo docker run --rm -it -v $PWD:/data ubuntu
I used the flag -v to map the current dir $PWD to the /data dir inside the container
inside the container:
I wrote some content to it
I read that same content
I exited the container
On the host:
I used ls -al to confirm that the file was persisted to my computer.
I confirmed that I could access that same file in my computer's filesystem.
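Strictly speaking, -v $PWD:/data is a bind mount of a host directory. If you do not need the files at a specific host path, a named volume managed by Docker works as well; a minimal sketch (the volume name mydata is just an example):
docker volume create mydata        # Docker decides where the data lives on the host
sudo docker run --rm -it -v mydata:/data ubuntu
docker volume inspect mydata       # shows the volume's mountpoint on the host
Files written to /data now survive container exits, without depending on your current directory.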
If you bind-mount a non-existent file (on the host), docker will happily create a directory in its place and share it with the container. Upon "fixing" the mistake (i.e., stopping the container, replacing the directory with the file, and starting the container), you'll get the following error:
docker: Error response from daemon: oci runtime error: container_linux.go:262: starting container process caused "process_linux.go:339: container init caused "rootfs_linux.go:57: mounting "[snip]" to [snip]" at "[snip]" caused "not a directory"""
: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
Here are some steps to reproduce from a console:
# ensure test file does not exist
$ rm -f /test.txt
# run hello-world container with test.txt bind mount
$ docker run --name testctr -v /test.txt:/test.txt hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
<snip>
# let's see what's in /
$ ls -l /
<snip>
dr-xr-xr-x 13 root root 0 Jul 17 01:08 sys
drwxr-xr-x 2 root root 4096 Jul 22 20:54 test.txt
drwxrwxrwt 1 root root 4096 Jul 17 09:01 tmp
<snip>
# let's correct the mistake and run the container again
$ rm -rf /test.txt
$ touch /test.txt
$ docker start testctr
Error response from daemon: OCI runtime create failed: container_linux.go:348: starting
container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58:
mounting \\\"/test.txt\\\" to rootfs \\\"/var/lib/docker/overlay2/
26fd6981e919e5915c31098fc74551314c4f05ce6c5e175a8be6191e54b7f807/merged\\\" at \\\"/var/lib/
docker/overlay2/26fd6981e919e5915c31098fc74551314c4f05ce6c5e175a8be6191e54b7f807/merged/
test.txt\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory
onto a file (or vice-versa)? Check if the specified host path exists and is the expected
type
Error: failed to start containers: testctr
Note that even though we get this error when starting the existing container, creating a new one actually works.
So my question is, how do I fix this? I can see two different possibilities:
recreating the container:
somehow export the command that created the container (docker run ...) into a variable (?)
delete the old container
run the command generated in step 1
somehow tweak the existing container to fix the mount
this may be impossible to do via docker, since apparently bind mounts are not managed by docker itself
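For reference, the bind mount is recorded in the container's configuration, which you can at least inspect (it is read-only there, but handy when reconstructing a docker run command by hand; testctr is the container from the repro above):
$ docker inspect --format '{{json .Mounts}}' testctr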
PS: This question is also supposed to fix this one.
Two options for generating docker run ... for existing containers:
assaflavie/runlike - I went with this since the other seemed to have some issues with labels (but this one doesn't support bulk inspection)
$ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock assaflavie/runlike <container>
nexdrew/rekcod
$ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock nexdrew/rekcod <container>
The final script would look like (untested):
# get names of running containers (names persist recreation)
running_containers=$(docker ps --format "{{.Names}}")
# stop running containers
docker stop $running_containers
# generate recreate script
containers=$(docker ps --all --format "{{.Names}}")
echo '#!/bin/sh' > ./recreate.sh
while read -r cname; do
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock assaflavie/runlike "$cname" >> ./recreate.sh
done <<< "$containers"
chmod +x ./recreate.sh
# ... do some action now (maybe also manually check recreate script) ...
# recreate containers
docker rm --force $containers
./recreate.sh
# restart containers that were previously running
docker start $running_containers
This seems to tackle my needs, but a few people noted that these tools might miss a docker feature or might even contain bugs (I noticed this already with rekcod, for example), so use with caution.
I am trying to set up my project with docker. I am using Docker Toolbox on Windows 10 Home. I am very new to docker. To my understanding, I have to copy my files to a new container and add a volume so that I can persist changes made by gulp.
Here is my folder structure
-- src
|- dist
|- node_modules
|- gulpfile.js
|- package.json
|- Dockerfile
The Dockerfile code
FROM node:8.9.4-alpine
RUN npm install -g gulp
CMD [ "ls", 'source' ]
I tried many variations of docker run -v, e.g.:
docker run -v /$(pwd):/source <container image>
docker run -v //c/Users/PcUser/Desktop/proj:/source <container image>
docker run -v //d/proj:/source <container image>
docker run -v /d/proj:/source <container image>
But no luck.
Can anyone describe how you would set this up for yourself with the same structure? And why am I not able to mount my host folder?
P.S.: If I use two containers, one for compiling my code with gulp and one with nginx to serve the content of the dist folder, how will I do that?
@sxm1972 Thank you for your effort and help.
You are probably using Windows Pro or a server edition. I am using Windows 10 Home edition
Here is how I solved it, so other people using the same setup can solve their issue.
There may be a better way to solve this, please comment if there is an efficient way.
So...
First, the question... Why don't I see my shared volume from my PC in my container?
Ans: If we use Docker's Boot2Docker with VirtualBox (which I am), then whenever a volume is mounted it refers to a folder inside the Boot2Docker VM, not on the Windows host.
Image: Result of -v with docker in VirtualBox Boot2Docker
So with this, if we try to use $ ls it will show an empty folder, which in my case it did.
So we have to actually mount the folder into the Boot2Docker VM if we want to share our files from the Windows environment to the container.
Image: Resulting Mounts Window <-> Boot2Docker <-> Container
To achieve this, we have to manually mount the folder into the VM with the following command:
vboxmanage sharedfolder add default --name "<folder_name_on_vm>" --hostpath "<path_to_folder_on_windows>" --automount
IF YOU GET AN ERROR RUNNING THE COMMAND SAYING vboxmanage NOT FOUND, ADD THE VIRTUALBOX FOLDER PATH TO YOUR SYSTEM PATH. FOR ME IT WAS C:\Program Files\Oracle\VirtualBox
After running the command, you'll see <folder_name_on_vm> in the VM's root directory. You can check it with docker-machine ssh default and then ls /. After confirming that the folder <folder_name_on_vm> exists, you can use it as a volume for your container.
docker run -it -v /<folder_name_on_vm>:/source <container> sh
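One assumption worth checking: the --automount may only take effect after the VM restarts, so if the folder does not appear right away, try restarting the machine and verifying from the host:
docker-machine restart default
docker-machine ssh default ls /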
Hope this helps...!
P.S. If you are feeling lazy and don't want to mount a folder, you can place your project inside your C:/Users folder, as it is mounted by default on the VM as shown in the image.
The problem is that the base image you use runs the node REPL as its default command. If you run the image as docker run -it node:8.9.4-alpine you will see a node prompt, and it will not run the npm command like you want.
The way I worked around the problem was to create my own base image using the following Dockerfile:
FROM node:8.9.4-alpine
CMD ["sh"]
Build it as follows:
docker build -t mynodealpine .
Then build your image using this modified Dockerfile:
FROM mynodealpine
RUN npm install -g gulp
CMD [ "/bin/sh", "-c", "ls source" ]
For the problem regarding mounting of volumes: since you are using Docker for Windows, you need to go into Settings (click on the icon in the system tray) and enable Shared Drives.
Here is the output I was able to get:
PS C:\users\smallya\testnode> dir
Directory: C:\users\smallya\testnode
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 2/18/2018 11:11 AM dist
d----- 2/18/2018 11:11 AM node_modules
-a---- 2/18/2018 11:13 AM 77 Dockerfile
-a---- 2/18/2018 11:12 AM 26 gulpfile.js
-a---- 2/18/2018 11:12 AM 50 package.json
PS C:\users\smallya\testnode> docker run -it -v c:/users/smallya/testnode:/source mynodealpinenew
Dockerfile dist gulpfile.js node_modules package.json
PS C:\users\smallya\testnode>
Thanks for the question. A possibly simpler configuration is via VirtualBox's graphical dialogue, which worked for me without using the command line, albeit maybe not as versatile:
configure the shared folder inside VirtualBox's shared-folders configuration dialogue,
and then call mount like this:
docker run --volume //d/docker/nginx:/etc/nginx
I will be binding the /etc/nginx directory in my container to
D:\Program Files\Docker Toolbox\nginx
source:
https://medium.com/@Charles_Stover/fixing-volumes-in-docker-toolbox-4ad5ace0e572
I'm developing a Symfony application with Docker.
I'm sharing a host volume which is supposed to contain my project, which should be served by Apache.
docker run -d -ti --name web -p 80:80 -v /Users/Matteo/Documents/em3:/var/www/html/applications ubuntu /bin/bash
As a base image I've used Ubuntu, on which I've installed Apache and PHP 7. Everything works, but when I enter my container:
docker exec -it web /bin/bash
root#85a23559d01b:/var/www/html/applications/auth# app/console cache:clear --env=dev
[Symfony\Component\Filesystem\Exception\IOException]
Failed to remove directory "/var/www/html/applications/auth/app/cache/de~/jms_serializer": .
Maybe this is because of the directory permissions?
root#85a23559d01b:/var/www/html/applications/auth/app# ls -al | grep cache
drwxr-xr-x 1 1000 staff 374 Oct 30 21:50 cache
chmod does not change anything though:
root#85a23559d01b:/var/www/html/applications/auth/app# chmod g+w cache
root#85a23559d01b:/var/www/html/applications/auth/app# ls -al | grep cache
drwxr-xr-x 1 1000 staff 374 Oct 30 21:50 cache
I guess I'm missing something. Any help would be appreciated.
As commented in symfony issue 2600
you can "easily" reproduce this if you use a Linux VirtualBox on a Windows host.
[And that might be the case here, using boot2docker from Docker Toolbox, instead of Docker for Windows and its Hyper-V]
cache:clear is never able to remove app/cache/dev_old - but that may be an issue with the shared folder system provided by VirtualBox (read about similar issues on their forums).
You have to upgrade VirtualBox Guest Additions
The OP Bertuz points, in the comments, to "Changing boot2docker to use NFS for local mounts in OS X" and its file-nfs-mount-boot2docker-sh gist (and a more recent one).
Here is part of my Dockerfile:
RUN mkdir /data
RUN chown www-data:www-data /data
RUN chmod 664 /data
VOLUME ["/data"]
I create the image with the command:
docker build -t webapp .
I run it like this:
docker run -d -p 80:80 -v /home/user/data:/data webapp
But in my host user dir, the data directory is created like this :
drwxr-xr-x 2 root root 4.0K Apr 28 21:52 data
And in the container (docker exec -it CONTAINER_ID bash) I have:
drwxr-xr-x 2 root root 4096 Apr 28 19:52 data
So the commands in the Dockerfile seem to be ignored.
How can a web app in docker simply get permission to write to a host directory?
So you are building an image, setting chmods, and it's all cool.
But then you run the container with the -v option, which means /data will be replaced with the mounted volume. At that point, all files and permissions from the built image are ignored; you can check this by running the container without the -v option. The solution is to create an entrypoint script (wired up with ENTRYPOINT or CMD in the Dockerfile) which first fixes the permissions and then runs the original command for your image.
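A minimal sketch of such an entrypoint (the file name entrypoint.sh and the apache2-foreground command are assumptions; substitute your image's actual main process):
#!/bin/sh
# entrypoint.sh: runs at container start, after the volume is mounted,
# so unlike RUN at build time it can fix the bind-mounted directory
chown -R www-data:www-data /data
chmod 775 /data
# hand off to the image's main command, preserving the CMD arguments
exec "$@"
And wire it up in the Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["apache2-foreground"]
The entrypoint runs as root by default, so the chown succeeds even though the bind-mounted directory arrives owned by root.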