I am trying to run the Docker command below but am receiving a file-not-found error. I have verified that the local folder /D/VMs/... contains the appropriate file, and that the adam-submit command functions properly. I believe there is an issue with how I am mounting the local folder - I assumed it would appear at /data inside the container. For context, I am following the tutorial at http://ampcamp.berkeley.edu/5/exercises/genome-analysis-with-adam.html
using the docker image at https://hub.docker.com/r/heuermh/adam/
Docker Run:
docker run -v '/D/VMs/hs/adam/data:/data' heuermh/adam adam-submit transform '/data/NA12878.sam' '/data/NA12878.adam'
Docker Run #2:
docker run -v //d/vms/hs/adam/data:/data heuermh/adam adam-submit transform /data/NA12878.sam /data/NA12878.adam
Error:
Exception in thread "main" java.io.FileNotFoundException: Couldn't find any files matching /data/NA12878.sam. If you are trying to glob a directory of Parquet files, you need to glob inside the directory as well (e.g., "glob.me.*.adam/*", instead of "glob.me.*.adam")
From the directories you listed, it looks like you're running Docker for Windows. Docker for Windows runs inside a VM, and folders mapped into a container are mapped from that VM. To map a folder from the host OS, it first needs to be shared with the VM, which is enabled by default only for C:/Users.
If you're using docker-machine, check the shared-folder settings of your VirtualBox VM; otherwise, check Docker's own file-sharing settings and make sure /D/VMs is included.
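Once the drive is shared, the remaining pitfall is the -v path syntax itself. Below is a small POSIX-shell sketch (the helper name is mine, not part of Docker) of converting a Windows-style path such as D:\VMs\hs\adam\data into the //d/... form used in the second attempt:

```shell
# Hypothetical helper: turn a Windows path (D:\VMs\hs\adam\data) into
# the //d/VMs/hs/adam/data form that older Docker for Windows /
# Docker Toolbox setups expect in -v mounts.
win_to_docker_path() {
  p="$1"
  drive="${p%%:*}"                       # drive letter, e.g. "D"
  rest="${p#*:}"                         # "\VMs\hs\adam\data"
  rest="${rest//\\//}"                   # backslashes -> forward slashes
  drive="$(printf '%s' "$drive" | tr 'A-Z' 'a-z')"
  printf '//%s%s\n' "$drive" "$rest"
}

win_to_docker_path 'D:\VMs\hs\adam\data'
# could then be used as:
#   docker run -v "$(win_to_docker_path 'D:\VMs\hs\adam\data'):/data" heuermh/adam ...
```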
Related
I have a Docker container (a Linux container running on Windows with WSL 2) running a .NET 5.0 application, whose Dockerfile and docker-compose.yml were created by someone else. I spun it up with docker run, passing a single environment variable and a port mapping. It works just fine until it attempts to create a file, which it does with a statement like this: System.IO.File.WriteAllText($"/output_json/myfile.json", jsonString);, and errors out. The error message says
Could not find a part of the path '/output_json/myfile.json'.
Since a Docker container is essentially a virtualized filesystem, I assume I need to allocate some space to the container, or share a folder on the host machine with it, so that it has an accessible location to save the file. Is that correct?
EDIT: I've just found this in docker-compose.yml:
services:
  <servicename>:
    volumes:
      - ./output:/output_json
Doesn't this mean that an output_json directory is supposed to be created? Does docker-compose not have any bearing on a container created with docker run?
The path /output_json probably doesn't exist in the Docker image. That could be because you're meant to map a directory on your host to that path. Then the container can put its output there and you can grab it after the container is done.
To try it, make an empty directory and map it to the /output_json path in your container by running the following two commands from a command line:
mkdir %temp%\container_output
docker run -v %temp%\container_output:/output_json <other options> <image name>
Then do cd %temp%\container_output and see what output the container has made.
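Regarding the edit: the volumes: entry in docker-compose.yml only takes effect when the container is started with docker-compose up; a plain docker run does not read docker-compose.yml at all, so the same bind mount has to be passed explicitly. A sketch of the equivalent docker run invocation in POSIX shell (echoed rather than executed, with the placeholder options from the question, so the mapping is visible):

```shell
# docker-compose's `volumes:` is applied by `docker-compose up`, not by
# `docker run`, so the same mapping must be given by hand via -v.
host_dir="$(pwd)/output"
mkdir -p "$host_dir"                     # the host-side directory must exist
mount_arg="${host_dir}:/output_json"     # mirrors `./output:/output_json`
echo docker run -v "$mount_arg" "<other options>" "<image name>"
```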
I am trying to do volume mapping from a Linux container to my local Windows machine.
Here are the steps:
Installed the latest version of Docker Desktop for Windows (2.4.0.0); it is currently using the WSL 2 based engine.
Started a container using my own image built on top of the alpine image. The working directory is set to '/app' in the Linux container.
An output file (Report.html) is created under a folder (Reports) in my Linux container once it runs. I am able to view the file in the container.
I would like to save the output file to a folder named 'Output' under my user directory on my local Windows machine.
Ran the following command in PowerShell in Admin mode:
docker run -it -v ~/Output:/app/Reports <imagename>
Issue:
Output file (Report.html) does not get copied to Output folder in local machine.
Note:
I don't see the option to select drive for file sharing in Docker settings.
Please guide me on how I can resolve this?
Using an absolute path in place of ~ worked, i.e. docker run -v C:/Users/12345/Output:/app/Reports
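A likely explanation: PowerShell does not expand ~ inside the -v argument, so Docker receives a literal ~/Output rather than the user directory. The same fix sketched in POSIX shell, resolving the home directory explicitly before building the mount argument (the image name stays the placeholder from the question):

```shell
# Build the -v argument from an explicit absolute path instead of ~
out_dir="$HOME/Output"
mkdir -p "$out_dir"                      # make sure the host folder exists
mount_arg="${out_dir}:/app/Reports"
echo docker run -it -v "$mount_arg" "<imagename>"
```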
I am new to docker and containers. I have a container consisting of an MRI analysis software. Within this container are many other software the main software draws its commands from. I would like to run a single command from one of the softwares in this container using research data that is located on an external hard drive which is plugged into my local machine that is running docker.
I know there is a cp command for copying files (such as scripts) into containers and most other questions along these lines seem to recommend copying the files from your local machine into the container and then running the script (or whatever) from the container. In my case I need the container to access data from separate folders in a directory structure and copying over the entire directory is not feasible since it is quite large.
I honestly just want to know how I can run a single command inside the container using inputs present on my local machine. I ran docker ps to get the CONTAINER_ID, which is d8dbcf705ee7. Having looked into executing commands inside containers, I tried the following command:
docker exec d8dbcf705ee7 /bin/bash -c "mcflirt -in /Volumes/DISS/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold.nii -out sub-S06V1A_task-compound_run-01_bold_mcf_COMMAND_TEST.nii.gz -reffile /Volumes/DISS/FMRIPREP_TMP/sub-S06V1A_dof6_ver1.2.5/fmriprep_wf/single_subject_S06V1A_wf/func_preproc_task_compound_run_01_wf/bold_reference_wf/gen_ref/ref_image.nii.gz -mats -plots"
mcflirt is the command I want to run inside the container. I believe the exec command should do what I hope, since if I run docker exec d8dbcf705ee7 /bin/bash -c "mcflirt" I get the help output for the mcflirt command, which is the expected outcome in that case. The files at the /Volumes/... paths are the files on my local machine I would like to access. I understand that the location of the files is the problem, since I cannot tab-complete the paths within this command; when I run this I get the following output:
Image Exception : #22 :: ERROR: Could not open image /Volumes/DISS/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold
terminate called after throwing an instance of 'RBD_COMMON::BaseException'
Can anyone point me in the right direction?
So if I understood you correctly, you need to execute a shell script and provide it with some context (like local files).
The approach is straightforward.
Let's say your script and all the files it needs are located in the /hello folder of your host PC (it doesn't really matter whether they are stored together; this just shows the technique).
/hello
- runme.sh
- datafile1
- datafile2
You mount this folder into your container to make the files accessible inside. If you don't need the container to modify them, it's better to mount in read-only mode.
You launch docker like this:
docker run -it -v /hello:/hello2:ro ubuntu /hello2/runme.sh
And that's it! Your script runme.sh gets executed inside the container and has access to the files next to it, thanks to the -v /hello:/hello2:ro option. It maps the host folder /hello to the container folder /hello2 in read-only (ro) mode.
Note that you can use the same name on both sides; I've only used different names to show the distinction.
I have a Docker container I use to compile a project, build a setup, and export it out of the container. For this I mount the checked-out sources (using $(Build.SourcesDirectory):C:/git/ in the volumes section of the TFS docker run task) and an output folder as two separate volumes. My project contains a submodule which is also correctly checked out; all the files are there. However, when my script executes nmake I get the following error:
Cannot find file: \ContainerMappedDirectories\347DEF6A-D43B-48C0-A5DF-CE228E5A10FD\src\Submodule\Submodule.pro
where the mapped container path corresponds to C:/git/ inside the Windows Docker container (running on a Windows host). I was able to start the container with an interactive PowerShell session, mount the folder, and find out the following:
All the files are there in the container.
When doing docker cp project/ container:C:/test/ and then running my build script, it finds all the files and compiles successfully.
When copying the mounted project within the container using PowerShell and then starting the build script, it also works.
So it seems nmake has trouble traversing a mounted directory within Docker. Any idea how to fix this? I'd rather avoid copying the project into the container, because that takes quite some time compared to simply mounting the checked-out project.
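One workaround, based on the observation above that builds succeed from a copied tree: copy the mounted sources to a container-local directory before invoking nmake, and build from the copy. A POSIX-shell sketch of the idea (the paths are stand-ins for the mapped C:/git and a container-local scratch directory):

```shell
# Simulate "copy off the mount, then build" with stand-in paths
work="$(mktemp -d)"
src="$work/mounted_git"          # stand-in for the mapped C:/git
dst="$work/local_copy"           # stand-in for a container-local directory
mkdir -p "$src/src/Submodule"
touch "$src/src/Submodule/Submodule.pro"
cp -r "$src" "$dst"              # copy the tree off the mount...
ls "$dst/src/Submodule"          # ...then point nmake at $dst instead
```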
I am trying to do some builds from Docker (Yocto specifically), and I would like the output to be in the Windows filesystem. However, no matter what I do, utime does not work from the Docker container.
(Note that everything is fine if I change the output directories to be inside the Docker container, instead of a Docker volume mounted to the local Windows filesystem.)
Specifically, from tar (extract): Cannot utime: Operation not permitted (note that adding -m works around this, but I do not have control over all the places tar is called)
Everything I read about this error points to a file permissions issue, but I have tried running docker from a PowerShell as Administrator, and it does not solve the problem.
Is this just a deficiency of the Windows filesystem (NTFS), or is there a way to configure Docker volumes to have the required permissions to set utime?
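For reference, the -m workaround mentioned above can be reproduced locally: tar -m tells tar not to restore modification times on extract, so it never calls utime on the target filesystem. A small sketch:

```shell
work="$(mktemp -d)"
cd "$work"
mkdir src && echo hello > src/file.txt
tar -cf archive.tar -C src file.txt      # pack one file
mkdir out
tar -xmf archive.tar -C out              # -m: skip restoring mtimes (no utime call)
cat out/file.txt
```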