I am trying to spin up a container from a Docker image while using the -v option to mount a local directory. My understanding is that the designated folder in the container should stay in sync with the local directory, so that an update on either side shows up on the other.
This is not happening, though: the local files are not visible in the container, and vice versa. Is this not how it's supposed to work?
Here is the command I use to spin the container:
docker run -dit --name zeppelin -p 4444:8080 -v /home/sammy/mnt/Zeppelin_Notebook:/zeppelin/notebook apache/zeppelin:0.9.0
I just figured out that I messed up the container path. I needed /opt/ in front of /zeppelin/notebook. Classic silly mistake.
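For reference, the corrected command (with /opt/ added to the container path):
docker run -dit --name zeppelin -p 4444:8080 -v /home/sammy/mnt/Zeppelin_Notebook:/opt/zeppelin/notebook apache/zeppelin:0.9.0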
I've been trying the whole day to get a simple example working: sharing a Windows directory with a Linux container running on a Windows Docker host.
I have read all the guidelines and run the following:
docker run -it --rm -p 5002:80 --name mount-test --mount type=bind,source=D:\DockerArea\PortScanner,target=/app/PortScannerWorkingDirectory barebonewebapi:latest
The original PortScanner directory on the host machine has a text file in it. The container is created successfully.
The issue is that when I run
docker exec -it mount-test /bin/bash
and then list the mounted directory PortScannerWorkingDirectory in the container, it shows as empty. Nor can the C# code read the contents of the host file in the mapped directory.
Am I missing something simple here? I feel stuck: I can't share files from the host Windows machine with the Linux container.
After several days of dealing with the issue I found a rather simple answer. Although the C and D drives were already shared with Docker in the Docker settings, I experimented and re-shared both drives (there is a Reset Credentials button for exactly that purpose in the Docker settings on Windows). After that the issue was resolved. I'm saving it here in the hope that it may help someone else, since this seems to be a glitch with permissions or something similar.
The issue is quite hard to diagnose: when the mount fails, the Docker container just silently writes into its own writable layer and no error pops up.
Go to the Docker settings -> Shared Drives -> Reset credentials, then re-tick the drive and click the Apply button.
Then execute the following command, as suggested by Docker, to verify that the share works:
docker run --rm -v c:/Users:/data alpine ls /data
Is it possible to copy files to the local machine by running a command inside of a Docker container? I am aware of docker cp <containerId>:/container/file/path /host/file/path. However, my understanding is that this has to be run from outside of the container. Is there a way to do it, or something similar, from within?
For some context, I have a Python script that is run inside a Docker container with something like the following command: docker run -ti --rm --net=host buildServer:5000/myProgram /myProgram.py -h. I would like to retrieve the files that are generated by this program so they can be edited. I could run the container in detached mode, docker cp the desired file and then shut down the container. However, I would like to abstract this away from the user.
Docker containers by design don't have any access to the host filesystem unless you provide it explicitly via volume mounts. So, in your example, you could do something like:
docker run -ti -v /tmp/data:/data --rm --net=host buildServer:5000/myProgram /myProgram.py -h
And within the container, the /data directory would be mapped to /tmp/data on your host. You could then copy files into /data to get at them on your host.
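For example, from inside the container, your script or a shell could copy its output into the mounted directory (the output path here is just an illustration, since the question doesn't say where the files are generated):
cp /myProgram/output/report.txt /data/
The file then shows up at /tmp/data/report.txt on the host.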
This assumes that you're running Docker on Linux. If you are using Windows or OS X there may be additional steps, since in those environments Docker is actually running on a Linux virtual machine and volume access may or may not behave as expected (I don't use those platforms so I can't comment authoritatively).
For more information:
https://docs.docker.com/engine/tutorials/dockervolumes/#/mount-a-host-directory-as-a-data-volume
I am new to Docker containers and am trying to solve a problem I am facing right now.
This is my understanding, based on limited knowledge:
When we create a Docker container, Docker creates a local mount and uses it as the root file system for the container.
Now, if I run any commands in the container from the host server using docker exec, Docker does not use the mounted partition as the / file system for the container. I mean, it still picks up the binaries and environment variables from the host server. Is there any option or alternative for making Docker use the originally mounted directory for docker exec too?
If I access/start the container with docker attach or docker run -i -t <image> /bin/bash, I get the mounted directory as my / file system, which gives me an environment entirely independent from my host system. But this doesn't happen with the docker exec command.
Please help!
You are operating under a misconception. The Docker image only contains what was installed in it. This is usually a very cut-down version of an operating system, for efficiency reasons.
The Docker container is started from an image, and that's a running version which can change and store state, but may be discarded.
docker run starts a container from an image. You can run the same image multiple times to create completely different containers (which happen to have the same starting point for their content).
docker exec attaches to one of those containers to run a command. So you will only see the things inside it that were inside the image, or that were added after start (like log files). It has no view of the host filesystem, and may not even be the same OS; the only requirement is that it shares elements of the kernel, although it usually has a selection of the commonly used binaries.
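A quick way to see this for yourself (alpine here is just a stand-in image):
docker run -d --name demo alpine sleep 3600
docker exec demo ls /
# lists the container's root filesystem, not the host's
docker rm -f demo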
And when you run an image to create a container, you can specify a mount. One of the options when you do this is passing through a host filesystem, with e.g. -v /path/on/host:/path_in/container. But you don't have to; you can use data containers or a Docker volume mount instead. For example, docker run -v /mount creates a mount point within the container, using the Docker filesystem, which isn't part of the parent host. This can be used to make a data container with:
docker create -v /path/to/data --name data_for_acontainer some_basic_image
And then mount volumes from that data container on a new one:
docker run -d --volumes-from data_for_acontainer some_app_image
Which will attach that data container onto the /path/to/data mount. But in neither case is the 'host' filesystem touched directly - this is the whole point of dockerising things.
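Putting it together, a minimal end-to-end sketch of the data-container pattern (alpine and the paths are placeholders):
# create a data container that owns the /path/to/data volume
docker create -v /path/to/data --name data_for_acontainer alpine
# run an app container that shares that volume
docker run -d --name acontainer --volumes-from data_for_acontainer alpine sleep 3600
# inspect the shared data from a throwaway container
docker run --rm --volumes-from data_for_acontainer alpine ls -la /path/to/data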
I have a Docker container which is running some code and creating some HTML reports. I want these reports to be published into a specific directory on the host machine, i.e. at /usr/share/nginx/reports
The way I have gone about doing this is to mount this host directory as a data volume, i.e. docker run -v /usr/share/nginx/reports --name my-container com.containers/my-container
However, when I ssh into the host machine and check the contents of the directory /usr/share/nginx/reports, I don't see any of the report data there.
Am I doing something wrong?
The host machine is an Ubuntu server, and the Docker container is also Ubuntu, no boot2docker weirdness going on here.
From "Managing data in containers", mounting a host folder to a container would be:
docker run -v /Users/<path>:/<container path>
(see "Use volume")
Using only -v /usr/share/nginx/reports would declare the internal container path /usr/share/nginx/reports as a volume, but would have nothing to do with the host folder.
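For this question's case, a bind mount that maps the host directory onto wherever the reports are written inside the container would look something like this (the container path is an assumption, since the question doesn't say where the reports are generated):
docker run -v /usr/share/nginx/reports:/path/to/reports/in/container --name my-container com.containers/my-container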
Bind mounts are only one of the types of mounts available; Docker also supports named volumes and tmpfs mounts.
The answer to this question is problematic because it varies depending on your operating system and your full requirements. The answer by VonC makes some assumptions that should be addressed and is therefore only correct in some contexts. Other answers on this topic generally ignore the fact that some people are running linux, others windows, and still others are on OSX or other weird OS's.
As VonC mentioned in his answer, in a lot of cases it is possible to bind-mount a host directory straight into the container, using a -v host-path:container-path argument to the docker command (you can also use --volume for added readability or --mount for rocket-science).
One of the biggest problems (in 2020) is the use of the Windows Subsystem for Linux (WSL), where bind-mounting a host volume is fraught with error and may or may not work as expected depending on whether the mounted path is in the Linux filesystem or the Windows filesystem. VonC's answer was written before WSL became a big problem, but it still assumes the local filesystem is real rather than mounted into a virtual machine of some kind.
I have found that a lot of engineers prefer to bypass this unnecessary confusion through the use of docker volumes. A docker volume can be created with the command:
docker volume create <name>
Listed with
docker volume ls
and removed with
docker volume rm <name>
You can mount this by specifying the name of the volume on the left-hand side of the --volume argument. If your volume was called, for example, 'logs', you could use something like --volume logs:/usr/share/nginx/reports to bind it to the log directory you're interested in. You can view the contents of the directory with something like this:
docker run -it --rm --volume logs:/logs alpine ls -AlF /logs/
This should list the files in that directory. If you have a file called 'nginx.log' for example, you could view it like this:
docker run -it --rm --volume logs:/logs alpine less /logs/nginx.log
And the contents would be paged to your terminal.
You can bind this volume to multiple containers simultaneously if needed. This is useful if, for example, you're writing to your logs with one container, and paging them to a console with another.
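A sketch of that pattern (the container names are placeholders): one container appends to the shared log while another follows it:
docker run -d --name writer --volume logs:/logs alpine sh -c 'while true; do date >> /logs/nginx.log; sleep 1; done'
docker run --rm --name reader --volume logs:/logs alpine tail -f /logs/nginx.log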
If you want to copy the example log file from above into a tmp directory on your local filesystem you can achieve that with:
docker run -it --rm --volume logs:/logs --volume /tmp:/local_tmp alpine cp /logs/nginx.log /local_tmp/
I am using Docker Toolbox on Windows and working on a Spring Boot application using Docker. My application writes logs to
users/path/service.log
So when I started my application from the host terminal, the log file was successfully updated.
But when I did the same in Docker, no file was created or updated.
So I changed my log file location to match the container's directories:
var/log/service.log
I started my container again, and my file was updated again.
You can choose any location as long as it matches a directory inside the container. Just bash into the container and see what suits you.
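For example (the container name my-spring-app is a placeholder):
docker exec -it my-spring-app bash
ls var/log/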
The next step is to copy the log files from the container to the host. You can use one of the two ways I know of:
1- Use volumes in Docker (a sketch follows after the command below).
2- Use the following Docker command to copy a file from a container to the host:
docker cp <containerId>:/file/path/within/container /host/path/target
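For the first option, a minimal sketch, assuming the app logs to /var/log/service.log inside the container (the image name and host path are placeholders; on Docker Toolbox, host paths generally need to live under C:\Users to be shared into the VM):
docker run -v /c/Users/you/logs:/var/log my-spring-image
# service.log then appears in C:\Users\you\logs on the host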
First, you need to create a directory where you want to share the data:
mkdir -p /abc/def/
Now, you need to create a Docker volume using the command below. As you can see, we are specifying the device as '/abc/def/':
docker volume create --driver local \
--opt type=none \
--opt device=/abc/def/ \
--opt o=bind \
spark-volume
Now, start your container with the command below:
docker run -d \
--mount type=volume,src=spark-volume,dst=/abc/def/ \
--network host \
img:tag
Now the Docker container will use /abc/def/ on the local filesystem as its storage, and all contents of /abc/def/ inside the container will be available on the local filesystem.
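A quick check that the bind-backed volume works (the file name is a placeholder):
echo hello > /abc/def/test.txt
docker run --rm --mount type=volume,src=spark-volume,dst=/data alpine cat /data/test.txt
# prints: hello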
In your application, if you set a working directory for your PHP code (the report path), the path must be the one inside the container; Docker will then automatically sync it to your host directory. It wasn't a Docker misconfiguration, but my application writing to the wrong place. Weird at first, but it worked in my case.