Empty host volume in docker container

I am using Docker for Windows, and I want to mount a host directory with files that I want to use in RStudio, inside a container based on a Bioconductor image. To mount the host directory I have used:
docker run -d -v //c/Users/myR:/home/rstudio/myR2 -e PASSWORD=password -p 8787:8787 bioconductor/bioconductor_docker:devel
When I open the RStudio interface in the web browser I can see that the directory myR2 is created, but it is empty. I have read that I should first share the host directory from Settings > Share folders, but I do not see this option in the Docker version I use (4.5.1). Any help? Thanks!
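One thing worth checking (a sketch, not from the original thread): on Docker Desktop 4.x the folder-sharing setting only exists with the Hyper-V backend (under Settings > Resources > File Sharing); with the default WSL 2 backend it is not needed, and a Windows-style path can be passed directly. The --mount form below is also useful for debugging, because it errors out if the source path does not exist instead of silently giving you an empty directory:
docker run -d -e PASSWORD=password -p 8787:8787 --mount type=bind,source="C:\Users\myR",target=/home/rstudio/myR2 bioconductor/bioconductor_docker:devel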

Related

Docker Windows Bind Mount not Copying and Refreshing Data

I'm using Docker on Windows. Versions are engine: 20.10.14, desktop: 4.7.0. In my current directory, I have a Dockerfile (unimportant for now) and an index.html file.
I created an nginx docker container with a bind mount to copy these files into the container: docker container run -d --name nginx_cust -p 80:80 -v %cd%:/usr/share/nginx/html nginx.
When I access localhost:80, I don't see my index.html file reflected, and when I enter the running container with docker container exec -it nginx_cust bash and check the mounted directory, it's empty.
Inspecting the container, I can see that the bind mount does look correct, and I don't see anything in the container log about this. Any ideas why this is not working?
After a lot of playing around, I realized that this got fixed when I moved the input files to a different directory, one that was less deeply nested. I strongly suspect some long-filename constraint was being violated silently.
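A minimal sketch of that workaround, with a hypothetical shallow directory C:\site standing in for the real project path:
mkdir C:\site
copy index.html C:\site\
docker container rm -f nginx_cust
docker container run -d --name nginx_cust -p 80:80 -v C:\site:/usr/share/nginx/html nginx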

Mounting host directory to container

I am launching a container for my application. But my app needs a few config files to log in. The files are stored in a host directory. How can I mount the host filepath into the container?
Host directory: /opt/myApp/config
Docker command currently used:
sudo docker run -d --name myApp-container -p 8090:8080 myApp-image
Please suggest the changes to the docker command to achieve this.
You need to use the -v/--volume flag like this:
-v <host dir>:<container dir>:ro
In your case it will be:
-v /opt/myApp/config:/opt/myApp/config:ro
You can use this flag multiple times. You can also drop the :ro part if you want the directory to be writable.
See the Docker documentation on volumes.
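Putting that together with the command from the question, the full invocation would look something like this (note that the -v flag has to come before the image name):
sudo docker run -d --name myApp-container -p 8090:8080 -v /opt/myApp/config:/opt/myApp/config:ro myApp-image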

How to access /var/jenkins_home in docker?

I'm transitioning my current Jenkins server to Docker. Following the guide on GitHub (https://github.com/jenkinsci/docker), I was able to successfully launch Jenkins with the command:
docker run -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
I'm not sure how to view/access the data in my container/volume through a file explorer. Is it only accessible through docker inspect? The guide on GitHub says I should avoid using a bind mount from a folder on the host machine into /var/jenkins_home. Is there another way to view and access my Jenkins jobs?
As you can see in the Jenkins CI Dockerfile source code, /var/jenkins_home is declared as a VOLUME.
This means it can be mounted on the host.
Your command mounts a named Docker volume to it, but you could also mount a path on your host.
For example:
docker run -p 8080:8080 -p 50000:50000 -v ~/jenkins_home:/var/jenkins_home jenkins/jenkins:lts
On Windows hosts, you might have to create the directory first.
You can change ~/jenkins_home to whatever suits your host environment; the point is that it is a folder you can easily navigate and inspect.
You can also still use the web interface, available on the port that you map on the host.
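If you keep the named volume from your original command, you can still locate and browse the data; a minimal sketch (the container name is a placeholder):
docker volume inspect jenkins_home
docker exec -it <jenkins-container> ls -la /var/jenkins_home
Note that the Mountpoint reported by docker volume inspect lives inside the Docker Desktop VM on Mac and Windows hosts, so on those systems browsing through docker exec is usually the easier route.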
If you want to see the data on the local host file system, you can use a bind mount instead of a volume; it will expose all the data in the jenkins_home folder on your local host file system. For example:
docker run -p 8080:8080 \
  --name jenkins \
  --mount type=bind,source="$(pwd)"/jenkins_home,target=/var/jenkins_home \
  jenkins/jenkins
For more clarification on bind mounts and volumes, please see the Docker documentation:
https://docs.docker.com/storage/bind-mounts/

Docker for Mac: Using Persistent Storage

I have recently discovered Docker for Mac (version 1.13.1). I am trying to work out how to use persistent storage.
What is the correct syntax for creating and using persistent storage?
I would suggest you read Manage data in containers.
One option is to mount a host directory:
docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This would mount the directory /src/webapp located on your host to the directory /webapp in the container.
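Another option for persistent storage is a named volume, which Docker manages for you and which survives removing the container; a minimal sketch using the same example image (the volume name webdata is illustrative):
docker volume create webdata
docker run -d -P --name web -v webdata:/webapp training/webapp python app.py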

Docker External File Access Not in /Users/ on OSX

So, despite Docker 1.3 now allowing easy access to external storage on OSX through boot2docker for files in /Users/, I still need to access files not in /Users/. I have a settings file in /etc/settings/ that I'd like my container to have access to. Also, the CMD in my container writes logs to /var/log in the container, which I'd rather have it write to /var/log on the host. I've been playing around with VOLUME and passing stuff in with -v at run, but I'm not getting anywhere. Googling hasn't been much help. Can someone who has this working provide help?
As boot2docker now includes the VirtualBox Guest Additions, you can share folders on the host computer (OSX) with the guest operating system (boot2docker-vm). /Users/ is automatically mounted, but you can mount/share custom folders. In your host console (OSX):
$ vboxmanage sharedfolder add "boot2docker-vm" --name settings-share --hostpath /etc/settings --automount
Start boot2docker and SSH into it ($ boot2docker up / $ boot2docker ssh).
Choose where you want to mount the "settings-share" (/etc/settings) in the boot2docker VM:
$ sudo mkdir /settings-share-on-guest
$ sudo mount -t vboxsf settings-share /settings-share-on-guest
Assuming /settings is the volume declared in the Docker container, add -v /settings-share-on-guest:/settings to the docker run command to mount the shared directory /settings-share-on-guest as a data volume.
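For example, assuming a hypothetical image called my-app that declares /settings as a VOLUME:
docker run -d -v /settings-share-on-guest:/settings my-app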
This works on Windows; it has not been tested on OSX, but it should work.
