Similar question: mac image path
On Mac, when I run docker inspect containerID,
I see most of the paths are under /var/lib/docker/.
However, this path exists neither on the host (Mac) nor inside the Docker container.
What does this path refer to?
You can find your files at container:path and use the docker commands to copy them to your local machine and vice versa (I'm assuming you are trying to move files, e.g. from your local machine to your container). I had the exact same issue you mention, but I managed to move files with
docker cp local_path containerID:target_path
To see your container ID, simply run docker ps -a; it will list your container even if it is stopped.
See https://docs.docker.com/engine/reference/commandline/cp/
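For example, a round trip might look like this (the container ID and file names here are hypothetical):

```
# Copy a local file into the container...
docker cp ./notes.txt 3aa35965846a:/tmp/notes.txt

# ...and copy a file from the container back to the host.
docker cp 3aa35965846a:/tmp/notes.txt ./notes-copy.txt
```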
I have Docker Desktop on the C drive, and WSL as well. I started an Ubuntu terminal on the F drive in a specific folder by making it the starting location. After executing docker run -d -p 80:80 docker/getting-started, it says "Unable to find image <image_name>" and starts the container.
After that, when the container is created, I can see it in Docker.
It also creates the image, but the problem is I can't find it, and I don't know where the image and container are stored.
How can I find Docker's files, and how can I create and run a container with its image in a WSL folder on the F drive (in this example)?
Well, I can't fully understand your problem, but here's an answer to your question:
"It also creates the image, but the problem is I can't find it, and I don't know where the image and container are stored."
To find images downloaded locally, run:
docker images
To find the metadata, which includes the path of the stored image, run:
docker inspect <image_name>
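If you only want the storage location, you can filter the inspect output with a Go template (a sketch; the exact fields depend on your storage driver, overlay2 assumed here):

```
# Show where the image's layers live on disk (overlay2 storage driver).
docker inspect --format '{{ json .GraphDriver.Data }}' <image_name>
```

Keep in mind that on Docker Desktop with the WSL 2 backend, these paths live inside Docker's own VM/distro, not directly in your C: or F: drive folders.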
I want to sync my local folder with that of a Docker container. I am using a Windows system with the WSL 2 backend. I tried running the following command as per the instructions of a Docker course instructor, but it didn't seem to sync:
docker run -v ${pwd}:\app:ro --env-file ./.env -d -p 3000:4000 --name node-app node-app-image
I faced a similar issue when I started syncing local folders with a Docker container on my Windows system. The solution was actually quite simple: instead of using -v ${pwd}:\app:ro in your first volume, it should be -v ${pwd}:/app:ro. Notice the / instead of \. Since your Docker container is a Linux container, the path must use /.
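For reference, the corrected command from the question would be:

```
docker run -v ${pwd}:/app:ro --env-file ./.env -d -p 3000:4000 --name node-app node-app-image
```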
As @Sysix pointed out, Docker will always overwrite the folder in the container with the one on the host (no matter whether it already existed or not). Only those files will be in that folder/volume that were created either on the host or in the container during runtime.
Learn more about bind mounts and volumes in the Docker documentation.
I'm running a container with the Docker socket passed through so it can run a sibling container; in that container I try to run another container and mount a volume to access some data. However, in the sibling container, the volume is either empty or the file is converted to a folder...
Running the first container:
$ docker run -v /var/run/docker.sock:/var/run/docker.sock -it example /bin/bash
root@3aa35965846a:/home/node/example# ls some_volume/
test.txt
root@3aa35965846a:/home/node/example# cat some_volume/test.txt
hello

# Running the second container
root@3aa35965846a:/home/node/example# docker run -v /home/node/example/some_volume/:/some_volume/ -it node:10 /bin/bash
root@6a84739fbb92:/# ls /some_volume/
test.txt
root@6a84739fbb92:/# cat /some_volume/test.txt/
cat: /some_volume/test.txt/: Is a directory
The first time I run the second container, the volume is empty. If I try to mount a file directly, it is converted to a folder, and after that, if I try to mount the folder like in the example above, it contains only the file I tried to mount earlier, and that file is now a folder.
How is this possible? If I try to mount a volume outside the first container I don't have any problem, so how can I fix this?
The first path in the docker run -v option is always on the host system. For example, if you
docker run -v /etc:/x busybox cat /x/shadow
it will dump out the host's encrypted password file, regardless of whether you ran this command directly from the host or from a container.
There isn't a way to share an arbitrary directory from one container to another. If the launching container knows something about its own directory structure (in particular that some directory was mounted from a specific host path or named volume) then it can replicate that to the other container, but that's not a generic answer. The other behaviors you're seeing are just a consequence of those directories not existing on the host system.
In general I would advise not using Docker for short-lived processes that principally interact with the outside world through the filesystem. Take whatever program you'd run in the other container, install it in your image's Dockerfile, and run it directly without going through Docker.
If you really can't avoid this workflow, the only thing I've found to work reliably is to docker create the container, docker cp files in, docker start it, and docker wait for it to finish. When it's done, docker cp the result out before docker rm it. That's a kind of painstaking workflow but it gets around the problem of the two containers not sharing any filesystem space.
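A minimal sketch of that workflow (image name, command, and paths are hypothetical):

```
# Create the container without starting it, and copy the input in.
docker create --name worker some_image process-data /in/data.txt /out/result.txt
docker cp ./data.txt worker:/in/data.txt

# Run it to completion.
docker start worker
docker wait worker

# Copy the result out, then clean up.
docker cp worker:/out/result.txt ./result.txt
docker rm worker
```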
I'm using an old version of Docker (1.9), and sometimes I hit a bug (a deadlock): I can't run any Docker command, like docker ps, docker inspect...
The container is still running. Can I export the data in the container? Or, where is the data stored on the host machine?
It depends whether your data is in a volume or in the container.
See the docs for docker export:
https://docs.docker.com/engine/reference/commandline/export/
In particular, note: "The docker export command does not export the contents of volumes associated with the container."
If you have volumes, see
https://docs.docker.com/storage/volumes/#backup-restore-or-migrate-data-volumes
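That page describes the usual pattern: run a helper container with --volumes-from the affected container and tar the data out (a sketch; the container name and paths are placeholders):

```
# Back up /dbdata from "stuck_container" into ./backup.tar on the host.
docker run --rm --volumes-from stuck_container -v $(pwd):/backup busybox tar cvf /backup/backup.tar /dbdata
```

Of course, this still requires a working docker run, so it only helps if the daemon responds at all.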
The docker cp command
https://docs.docker.com/engine/reference/commandline/cp/
should copy whatever you want from a container to somewhere on the host, but this will be useless for you if most docker xxx commands hang.
The data is usually stored in /var/lib/docker, but that can change depending on your Docker version and whether you run on Linux, Mac, or Windows.
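When the daemon is responsive, you can ask it for its data root directly (note: --format support on docker info may not exist in a version as old as 1.9):

```
# Print the daemon's data root, e.g. /var/lib/docker.
docker info --format '{{ .DockerRootDir }}'
```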
I am new to Docker containers and I am trying to solve a problem I am facing right now.
This is my understanding, based on limited knowledge:
When we create a Docker container, Docker creates a local mount and uses it as the root file system for the container.
Now, if I run any commands in the container from the host server using docker exec, Docker is not using the mounted partition as the / file system for the container. I mean, it still picks up the binaries and env variables from the host server. Is there any option/alternate solution for making Docker use the original mounted directory for docker exec too?
If I access/start the container with docker attach or docker run -i -t <image> /bin/bash, I get the mounted directory as my / file system, which gives me an entirely independent environment from my host system. But this doesn't happen with the docker exec command.
Please help!
You are operating under a misconception. The docker image only contains what was installed in it. This is usually a very cut down version of an operating system for efficiency reasons.
The docker container is started from an image - and that's a running version, which can change and store state - but may be discarded.
docker run starts a container from an image. You can run the same image multiple times to create completely different containers (which happen to have the same starting point for their content).
docker exec attaches to one of those containers to run a command. So you will only see the things inside it that were inside the image or added after start (like log files). It has no view of the host filesystem, and may not be the same OS - the only requirement is that it shares elements of the kernel - although it usually has a selection of the commonly used binaries.
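A quick sketch to see this yourself (names are just examples):

```
# Start a long-running container from a minimal image.
docker run -d --name demo busybox sleep 3600

# This lists the container's root filesystem, not the host's.
docker exec demo ls /

# Clean up.
docker rm -f demo
```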
When you run an image to create a container, you can also specify a mount. One of the options is passing through a host filesystem, e.g. -v /path/on/host:/path_in/container. But you don't have to; you can use data containers or a docker volume mount instead. E.g. docker run -v /mount creates a mount point within the container, using the Docker filesystem, which isn't part of the parent host. This can be used to make a data container with:
docker create -v /path/to/data --name data_for_acontainer some_basic_image
And then mount volumes from that data container on a new one:
docker run -d --volumes-from data_for_acontainer some_app_image
Which will attach that data container's volume at the /path/to/data mount point. But in neither case is the host filesystem touched directly - this is the whole point of dockerising things.
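For completeness, the same idea with a named volume, which is the more common approach today (names here are illustrative):

```
# Create a Docker-managed named volume and mount it into a container.
docker volume create app_data
docker run -d -v app_data:/path/to/data some_app_image
```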