I am trying to do some builds from docker (yocto specifically), and I would like to be able to run it and have the output be in the Windows filesystem. However, no matter what I do, utime does not work from the docker container.
(Note that all is fine if I change the output directories to be inside the docker container, instead of to a docker volume mounted to the local Windows filesystem.)
Specifically, from tar (extract): Cannot utime: Operation not permitted (note that adding -m works around this, but I do not have control over all the places tar is called).
Everything I read about this error points to a file permissions issue, but I have tried running docker from a PowerShell as Administrator, and it does not solve the problem.
Is this just a deficiency of the Windows filesystem (NTFS), or is there a way to configure docker volumes to have the required permissions to set utime?
Related
I have Windows 10 and an SSD (e.g. a Samsung SSD 256G).
If I create a Docker ubuntu container and go somewhere inside it (e.g. /home/myname)
and I create a test.txt which contains "hello world", it might be at "/home/myname/test.txt",
and test.txt has its own size (say 8kb), so I think it should take its room from the Samsung SSD.
I can access test.txt using 'docker attach', and I also know how to mount using the -v option so that I can change or update that file (I know it is then just duplicated from the container).
But I want to see or access the test.txt file from my Windows 10 C: drive or the Windows 10 desktop, or find out, using the find/search function provided by Windows 10, how test.txt exists on my Samsung SSD.
Sorry for my limited English and basic knowledge of computing systems.
The following comes from "https://docs.docker.com/storage/", but it is not enough for me:
By default all files created inside a container are stored on a writable container layer. This means that:
The data doesn’t persist when that container no longer exists, and it can be difficult to get the data out of the container if another process needs it.
A container’s writable layer is tightly coupled to the host machine where the container is running. You can’t easily move the data somewhere else.
Writing into a container’s writable layer requires a storage driver to manage the filesystem. The storage driver provides a union filesystem, using the Linux kernel. This extra abstraction reduces performance as compared to using data volumes, which write directly to the host filesystem.
Docker has two options for containers to store files in the host machine, so that the files are persisted even after the container stops: volumes, and bind mounts. If you’re running Docker on Linux you can also use a tmpfs mount. If you’re running Docker on Windows you can also use a named pipe.
Keep reading for more information about these two ways of persisting data.
Try the suggestions here:
https://stackoverflow.com/a/27320731/13064727
I think this is a 2-step process; maybe you are missing the first step.
So it seems you don't understand how -v works:
$ docker run -ti --rm -v "<your_windows_path>:/apps" -w /apps ubuntu bash
root@b2fd40f5f423:/apps# echo "helloworld" > test.txt
-w /apps (WORKDIR) makes sure that the file you create inside the container ends up under the same path that is reflected to your Windows path.
From your Windows system, you should then be able to find this file on your local disk or SSD under <your_windows_path>.
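For example, on the Windows side you could check from PowerShell (just an illustration, still using the same placeholder path):
PS> Get-ChildItem "<your_windows_path>\test.txt"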
So I have this remote folder /mnt/shared mounted with fuse. It is mostly available, except that there are some disconnections from time to time.
The actual mounted folder /mnt/shared becomes available again when the re-connection happens.
The issue is that I put this folder into a docker volume to make it available to my app: /shared. When I start the container, the volume is available.
But if a disconnection happens in between, then even though the /mnt/shared folder on the host machine becomes available again, the /shared folder is not accessible from the container, and I get:
user@machine:~$ docker exec -it e313ec554814 bash
root@e313ec554814:/app# ls /shared
ls: cannot access '/shared': Transport endpoint is not connected
In order to get it to work again, the only solution I found is to docker restart e313ec554814, which brings downtime to my app, hence is not an acceptable solution.
So my questions are:
Is this somehow a docker "bug" not to reconnect to the mounted folder when it is available again?
Can I execute this task manually, without having to restart the whole container?
Thanks
I would try the following solution.
If you currently mount the volume into your container like so:
docker run -v /mnt/shared:/shared my-image
then I would create an intermediate directory /mnt/base/shared and mount it into the container instead, like so:
docker run -v /mnt/base/shared:/base/shared my-image
and I would also adjust my code to refer to the new path, or create a link from /base/shared to /shared inside the container.
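For instance (a minimal sketch, assuming your app still expects the old /shared path), the link could be created in the image or in the container's entrypoint:
ln -s /base/shared /shared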
Explanation:
The problem is that the mounted directory /mnt/shared is probably deleted on the host machine when there is a disconnection, and a new directory is created once the connection is back. But the container started running with a directory mapping to the old directory, which was deleted. By creating an intermediate directory and mapping to that instead, you avoid the issue.
Another solution that might work is to mount the directory using bind-propagation=shared
e.g:
--mount type=bind,source=/mnt/shared,target=/shared,bind-propagation=shared
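For instance, a full run command using this option could look like the following (sticking with the placeholder image name used above):
docker run --mount type=bind,source=/mnt/shared,target=/shared,bind-propagation=shared my-image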
See the docker docs explaining bind-propagation.
I am new to docker and containers. I have a container consisting of an MRI analysis software. Within this container are many other software packages that the main software draws its commands from. I would like to run a single command from one of the software packages in this container, using research data that is located on an external hard drive which is plugged into my local machine that is running docker.
I know there is a cp command for copying files (such as scripts) into containers and most other questions along these lines seem to recommend copying the files from your local machine into the container and then running the script (or whatever) from the container. In my case I need the container to access data from separate folders in a directory structure and copying over the entire directory is not feasible since it is quite large.
I honestly just want to know how I can run a single command inside the docker using inputs present on my local machine. I have run docker ps to get the CONTAINER_ID which is d8dbcf705ee7. Having looked into executing commands inside containers I tried the following command:
docker exec d8dbcf705ee7 /bin/bash -c "mcflirt -in /Volumes/DISS/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold.nii -out sub-S06V1A_task-compound_run-01_bold_mcf_COMMAND_TEST.nii.gz -reffile /Volumes/DISS/FMRIPREP_TMP/sub-S06V1A_dof6_ver1.2.5/fmriprep_wf/single_subject_S06V1A_wf/func_preproc_task_compound_run_01_wf/bold_reference_wf/gen_ref/ref_image.nii.gz -mats -plots"
mcflirt is the command I want to run inside the container. I believe the exec command would do what I hope since if I run docker exec d8dbcf705ee7 /bin/bash -c "mcflirt" I will get help output for the mcflirt command which is the expected outcome in that case. The files inside of the /Volume/... paths are the files on my local machine I would like to access. I understand that the location of the files is the problem since I cannot tab complete the paths within this command; when I run this I get the following output:
Image Exception : #22 :: ERROR: Could not open image /Volumes/DISS/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold
terminate called after throwing an instance of 'RBD_COMMON::BaseException'
Can anyone point me in the right direction?
So if I got you right, you need to execute some shell script and provide the context (like local files).
The way is straightforward.
Let's say your script and all the needed files are located in the /hello folder of your host PC (it doesn't really matter whether they are stored together or not, this is just to show the technique).
/hello
- runme.sh
- datafile1
- datafile2
You mount this folder into your container to make the files accessible inside. If you don't need the container to modify them, it is better to mount it in read-only mode.
You launch docker like this:
docker run -it -v /hello:/hello2:ro ubuntu /hello2/runme.sh
And that's it! Your script runme.sh gets executed inside the container, and it has access to the nearby files thanks to the -v /hello:/hello2:ro directive. It maps the host's folder /hello into the container's folder /hello2 in read-only (ro) mode.
Note that you could use the same name on both sides; I've just made them different to show the distinction.
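Applied to your situation, that means starting a new container with the external drive mounted, rather than using docker exec on the already-running one (a sketch only; 'your-mri-image' stands in for whatever image container d8dbcf705ee7 was started from):
docker run -v /Volumes/DISS:/Volumes/DISS your-mri-image /bin/bash -c "mcflirt -in /Volumes/DISS/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold.nii -mats -plots"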
I am trying to run the below Docker command but am receiving a file not found error. I have verified that the local folder /D/VMs/... contains the appropriate file, and that the adam-submit command is properly functioning. I believe there is an issue with how I am mounting the local folder - I assumed that it would be at the location /data for the docker machine. For context, I am following the tutorial at http://ampcamp.berkeley.edu/5/exercises/genome-analysis-with-adam.html
using the docker image at https://hub.docker.com/r/heuermh/adam/
Docker Run:
docker run -v '/D/VMs/hs/adam/data:/data' heuermh/adam adam-submit transform '/data/NA12878.sam' '/data/NA12878.adam'
Docker Run #2:
docker run -v //d/vms/hs/adam/data:/data heuermh/adam adam-submit transform /data/NA12878.sam /data/NA12878.adam
Error:
Exception in thread "main" java.io.FileNotFoundException: Couldn't find any files matching /data/NA12878.sam. If you are trying to glob a directory of Parquet files, you need to glob inside the directory as well (e.g., "glob.me.*.adam/*", instead of "glob.me.*.adam"
From the directories you listed, it looks like you're running Docker for Windows. This runs inside of a VM, and folders mapped into a container are mapped from that VM. To map a folder from the parent OS, it needs to first be shared to the VM which is enabled by default on C:/Users.
If you're using docker-machine, check the settings of your VirtualBox VM; otherwise, check the folder sharing settings of Docker itself and make sure /D/VMs is included.
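For the docker-machine case, sharing the folder could look roughly like this (a sketch; "default" is the usual docker-machine VM name and "d/vms" is just an illustrative share name, and the share still has to be mounted inside the boot2docker VM with mount -t vboxsf):
VBoxManage sharedfolder add "default" --name "d/vms" --hostpath "D:\VMs"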
I'm new to Docker, I come from Vagrant.
I'm using Docker (1.9.1) inside my "D:/Works/something/DockerFirstTime" folder.
Now I create the machine with
docker-machine create first
and simple Dockerfile:
FROM ruby:2.2-onbuild
and simple Gemfile:
source 'https://rubygems.org'
gem 'rails'
Now with this command I want to use a shared folder like in Vagrant in the same hard drive of my Dockerfile:
docker run -it -v //d/Works/something/DockerFirstTime:/usr/src/app -w /usr/src/app ruby:2.2 bundle install
But it doesn't work.
How to do this?
I know that Docker only shares the /c/User/folder, is that right?
How can I use the folder with the files and modify my files with editor in Windows and then restart server like in a normal shell on a single PC or like in Vagrant?
This question and this question have a similar root problem: mounting a non-C:/ drive folder in boot2docker. I wrote an in-depth answer to the other question that provides the same information that is in the first half of @VonC's answer.
From Docker Docs:
All other paths come from your virtual machine’s filesystem. [...] In the case of VirtualBox you need to make the host folder available as a shared folder in VirtualBox. Then, you can mount it using the Docker -v flag.
To get your folder mounted in a container:
This mounts your entire D:\ drive; you can simply change the file paths to be more granular and specific.
Share the directory with VBox:
This only needs to be done once.
In windows CMD:
VBoxManage sharedfolder add "boot2docker-vm" --name "d-share" --hostpath "D:\"
Mount the shared directory in your VM:
This will need to be done each time you restart the VM.
In the Boot2Docker VM terminal:
mount -t vboxsf -o uid=1000,gid=50 d-share /d
To see the sources and an explanation of how this works, see my full answer to the other similar question.
After this you can use the -v/--volume flag in Docker to mount this folder, or any sub-folders or files, into containers. If you mounted your whole D:\ drive you can use the exact docker run command from your question and it should now work. If you mounted only a specific part of your drive you will have to change the paths to match.
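For example, with D:\ shared and mounted at /d inside the VM as above, the command from your question becomes (a sketch, simply reusing your paths relative to the new mount point):
docker run -it -v /d/Works/something/DockerFirstTime:/usr/src/app -w /usr/src/app ruby:2.2 bundle install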
To edit in Windows, run in Docker:
Also from Docker Docs:
Mounting a host directory can be useful for testing. For example, you can mount source code inside a container. Then, change the source code and see its effect on the application in real time.
As a VBox shared directory you should be able to see changes made from the Windows side reflected in the boot2docker vm.
You may need to restart containers to see the changes actually appear, this depends on how the program running inside the container, in your case ruby, uses the files. If the files are compiled into an app when the container starts, for example, you will definitely need to restart the container to see the changes.
Note:
Beware the CR LF vs. LF line ending difference when writing files in Windows and reading them in Linux. Make sure your text editor is saving files with Unix line endings or else you may start to see errors caused by '^M' appended to the end of all your lines.
I know that Docker only shares the /c/User/folder, is that right?
It does, and it is able to do so because the VirtualBox VM used for providing a Linux host for docker is sharing C:\Users.
For docker to see another folder, you would need to:
use VBoxManage sharedfolder add "VM name" --name "sharename" --hostpath "D:\Works"
then mount /D/Works within a VM session, as mentioned in "share windows folder (other than c/Users/) with docker container (using docker windows client)", and mentioned in boot2docker:
mount -t vboxsf -o uid=1000,gid=50 sharename /some/mount/location
The issue with that last alternative is described in "Introduction to boot2docker" (scroll down to the "Shared folders" section):
The main issue with vboxsf is that it does not do any sort of caching, so when you are attempting to share a large amount of small files (big git repos) or anything that is filesystem read heavy (grunt), performance becomes a factor.
The best solution I have come up with so far is using vagrant with a customized version of boot2docker with NFS support enabled, which has very little “hacking” to get working which is nice.
And a good enough selling point for me is the speed increase by using NFS instead of vboxsf, it’s pretty staggering actually.
This is the project that I have been using: https://vagrantcloud.com/yungsang/boxes/boot2docker.
The magic sauce in the volume sharing is in this line.
config.vm.synced_folder ".", "/vagrant", type: "nfs"
This tells Vagrant to share your current directory into the boot2docker VM at the /vagrant directory, using NFS.
However, that project seems quite old and would need to be adapted in order to include the latest boot2docker.iso (docker 1.9.1).