So I have a command that produces a huge file and pipes it to another command in a chain through stdout and stdin.
The problem is that this generates a file under /dev corresponding to stdout, which contains the entire output of the first command, and that file is too big and hits the memory limit of the container.
I've tried mounting the /dev folder as a volume, but then the container does not start, and for other reasons I would prefer to avoid changing the code so that it writes the output to a file and passes that file to the second command.
So does anyone have other ideas to solve this problem?
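Roughly, the chain looks like this (cmd_produce and cmd_consume are hypothetical stand-ins for the real commands):

cmd_produce | cmd_consume

The intent is for the data to stream through the pipe rather than being materialized as one huge file along the way.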
This might come across as a stupid question, but I am unable to figure out something about Docker volumes. Going through the official documentation, I can see that we can map the host machine's file system onto the container for persistent storage. Following the instructions, I was able to successfully mount a folder in my container.
Once I exec bash into the container, I can see the mapped directory structure there as expected. My question is: how is the data mapped between these two paths, that is, from the container to the mounted volume on the host OS? Is the data duplicated, or does the container store the data directly on the host volume, with the mapped path shown as something like a symlink?
I'm asking because we are trying to keep a large amount of data on a mounted disk while making it accessible to the container, on the assumption that mounting a volume stores the data directly on the disk and nothing in the container.
The Docker documentation refers to this type of mount as a "bind mount"; that's also a technical Linux term for making one part of the filesystem appear somewhere else as well, and there's a mount --bind option you can use outside of Docker (usually a fairly specialized option).
On native Linux, the host content and the container-visible content are literally the exact same disk content. If you have a bind-mounted host directory or a named Docker volume mounted over a container directory, all reads and writes will use that mounted content, and in fact nothing will be written to the container filesystem on that path.
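As a quick illustration (the image name and paths here are made up, not from the question), both forms put reads and writes directly on the mounted content rather than on the container filesystem:

# bind mount: a host directory appears at /data inside the container
docker run -v /srv/appdata:/data myimage

# named volume: Docker-managed storage mounted at /data
docker volume create appdata
docker run -v appdata:/data myimage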
You mention symlinks; these are always resolved as filenames in their respective filesystem space. If the mounted filesystem has a symlink passwd -> /etc/passwd, then reading it will yield the host's password file on the host and the container's password file inside the container. If it has a symlink f -> ../f, then it will look at the directory above the mount point in whichever filesystem it is being resolved in.
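A small sketch of that symlink behavior, assuming a host directory /srv/share bind-mounted at /share in a container from a hypothetical image myimage:

# on the host: create a symlink inside the shared directory
ln -s /etc/passwd /srv/share/passwd

# on the host, this resolves to the host's /etc/passwd
cat /srv/share/passwd

# inside the container, the same link resolves to the container's /etc/passwd
docker run -v /srv/share:/share myimage cat /share/passwd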
On non-Linux this process is a little bit more technically complex since there is typically a Linux virtual machine involved in the mix. This usually manifests as file synchronization appearing slow. For data you don't need to directly access as a human, storing it in a named Docker volume will usually be faster.
I have a little problem with Docker.
I'm trying to use a volume shared with my computer. I can see the files on my computer, but from my container they are empty.
I tried to create a file in the /root of my container (outside the shared volume) and I can see the file without any problem.
If I do echo test > test.txt (in my shared volume), the file content is empty.
I execute this command:
docker run -v "D:\My App:/home/app" -it MyImage /bin/bash
In the /home/app folder, I can see the files on my computer. But if I do:
cat /home/app/test.txt
It tells me there's nothing in the file, even though the file exists and contains text.
If I create a file from my container, in the shared volume, I find it on my computer (and it is not empty).
If I create a file from my computer, I find it in the container, but it is empty when I try to display it.
Currently, when I do a cat test.txt, it doesn't display anything.
This should display this is a test
First, check your Docker for Windows settings:
If your D:\ drive is not shared, you won't see much in your container.
docker/for-win issue 25 points out multiple possible issues:
If you are using Docker Toolbox:
In my case, Docker Toolbox created a VM named default in VirtualBox, and I added the shared folder in the VM: VirtualBox -> default (VM) -> Settings -> Shared Folders -> Add.
Then you can specify both the path on your machine and the mapped path in the VM, like:
The 1st field is the path in your machine, like D:\my\app
The 2nd is the path in the VM, like /my-vm/app
Choose to Mount Automatically (a VBoxManage command-line equivalent is sketched below).
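If you prefer the command line, roughly the same shared folder can be created with VBoxManage (the folder name and paths are examples; default is the Docker Toolbox VM):

VBoxManage sharedfolder add default --name "my-app" --hostpath "D:\my\app" --automount

Depending on your VirtualBox version, you may need the VM to be powered off first, or to add --transient.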
Another:
One of the issues I had when learning was trying to mount a volume in my container, but then having a folder that conflicted.
For example, I'd make my workingdir /foo/bar, then try to use a volume for /foo/bar/private as well, BUT already have a folder called private in my initial mount.
I would see no error, but I'd see the first folder and not my 2nd volume.
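A minimal reconstruction of that conflicting layout (image name and paths are made up): the two mounts overlap at /foo/bar/private, and one silently shadows the other rather than producing an error.

docker run \
  -v /host/project:/foo/bar \
  -v /foo/bar/private \
  myimage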
Or:
docker/for-win issue 2151: "Volumes mounted from a Linux WSL instance don't resolve in container".
It refers to "how to use Docker with WSL".
The last thing we need to do is set things up so that volume mounts work. This tripped me up for a while because check this out…
When using WSL, Docker for Windows expects you to supply your volume paths in a format that matches this: /c/Users/nick/dev/myapp.
But, WSL doesn’t work like that. Instead, it uses the /mnt/c/Users/nick/dev/myapp format.
Honestly I think Docker should change their path to use /mnt/c because it’s more clear on what’s going on, but that’s a discussion for another time.
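One commonly mentioned workaround at the time, assuming a WSL build that reads /etc/wsl.conf, was to change WSL's automount root so that drives appear under /c instead of /mnt/c, matching what Docker for Windows expects:

# /etc/wsl.conf inside the WSL distribution
[automount]
root = /

# after fully restarting WSL, C:\Users\nick shows up as
# /c/Users/nick instead of /mnt/c/Users/nick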
I have mounted a directory as an input mount of a Docker container (the directory contains my files).
Inside the container, I list the mounted files at the beginning and in the middle of my script. The problem is that some files do not get mounted.
Sometimes I can't see some of them at the beginning of my script; other times, all of them are in the input mount at the beginning of the script but some of them are missing from the input mount by the middle of the script.
What is the problem?
P.S.: I mostly face this problem when launching containers in bulk.
I am new to docker and containers. I have a container consisting of an MRI analysis software. Within this container are many other software the main software draws its commands from. I would like to run a single command from one of the softwares in this container using research data that is located on an external hard drive which is plugged into my local machine that is running docker.
I know there is a cp command for copying files (such as scripts) into containers and most other questions along these lines seem to recommend copying the files from your local machine into the container and then running the script (or whatever) from the container. In my case I need the container to access data from separate folders in a directory structure and copying over the entire directory is not feasible since it is quite large.
I honestly just want to know how I can run a single command inside the Docker container using inputs present on my local machine. I have run docker ps to get the CONTAINER_ID, which is d8dbcf705ee7. Having looked into executing commands inside containers, I tried the following command:
docker exec d8dbcf705ee7 /bin/bash -c "mcflirt -in /Volumes/DISS/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold.nii -out sub-S06V1A_task-compound_run-01_bold_mcf_COMMAND_TEST.nii.gz -reffile /Volumes/DISS/FMRIPREP_TMP/sub-S06V1A_dof6_ver1.2.5/fmriprep_wf/single_subject_S06V1A_wf/func_preproc_task_compound_run_01_wf/bold_reference_wf/gen_ref/ref_image.nii.gz -mats -plots"
mcflirt is the command I want to run inside the container. I believe the exec command would do what I hope, since if I run docker exec d8dbcf705ee7 /bin/bash -c "mcflirt" I get help output for the mcflirt command, which is the expected outcome in that case. The files inside the /Volumes/... paths are the files on my local machine that I would like to access. I understand that the location of the files is the problem, since I cannot tab-complete the paths within this command; when I run it I get the following output:
Image Exception : #22 :: ERROR: Could not open image /Volumes/DISS/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold
terminate called after throwing an instance of 'RBD_COMMON::BaseException'
Can anyone point me in the right direction?
So if I understood you correctly, you need to execute some shell script and provide it with context (like local files).
The approach is straightforward.
Let's say your script and all the needed files are located in the /hello folder of your host PC (it doesn't really matter whether they are stored together or not; this just shows the technique).
/hello
- runme.sh
- datafile1
- datafile2
You mount this folder into your container to make the files accessible inside. If you don't need the container to modify them, it's better to mount it in read-only mode.
You launch Docker like this:
docker run -it -v /hello:/hello2:ro ubuntu /hello2/runme.sh
And that's it! Your script runme.sh gets executed inside the container and has access to the nearby files, thanks to the -v /hello:/hello2:ro directive. It maps the host's folder /hello onto the container's folder /hello2 in read-only (ro) mode.
Note that you can use the same name on both sides; I've just made them different here to show which is which.
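Applied to the question above, that would mean starting a new container from the same image with the external drive mounted, instead of exec-ing into the already-running one (your-mri-image is a placeholder, and the path assumes the drive really is visible on the host at /Volumes/DISS). Mounting it at the same path inside the container means the paths in the original mcflirt command resolve unchanged:

docker run -v /Volumes/DISS:/Volumes/DISS your-mri-image \
  /bin/bash -c "mcflirt -in /Volumes/DISS/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold.nii -mats -plots"

(The -out and -reffile arguments from the question can be added back in the same way.)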
I have a very large file in my Docker container (it's a VirtualBox image) which, unfortunately, must be modified as part of running it. Docker's copy-on-write policy works against me here: any mutation/copying of the file takes about 10 minutes, compared to about 10 seconds to copy the same file on the host.
Can anything be done to speed up the creation/copy of very large files within a docker container? Note that this is an entirely transient file that I do not need to persist after the container is closed.
Declare the folder the file is in as a volume. If you do this, the copy-on-write policy is not applied. Note that you don't have to mount this volume to the host system; it is sufficient to declare it as a volume.
For more information: https://docs.docker.com/userguide/dockervolumes/
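A minimal sketch of both ways to declare it, assuming the large file lives under /vmimages (the path and image name are placeholders):

# in the Dockerfile: everything under /vmimages is stored in an
# anonymous volume, outside the copy-on-write layer
VOLUME /vmimages

# or at run time, without touching the Dockerfile
docker run -v /vmimages myimage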