I'm not sure that I'm trying to do it the right way, but I would like to use docker.io as a way to package some programs that need to be run from the host.
However these applications take filenames as arguments and need to have at least read access. Some other applications generate files as output and the user expects to retrieve those files.
What is the docker way of dealing with files as program parameters?
Start Docker with a mounted volume and use this directory to manipulate files.
See: https://docs.docker.com/engine/tutorials/dockervolumes/
If you have apps that require arguments when they're run, you can inject your parameters as environment variables when you run your Docker container,
e.g.
docker run -e ENV_TO_INJECT=my_value .....
Then in your entrypoint (or cmd) make sure you just run a shell script
e.g. (in Dockerfile)
CMD ["/my/path/to/run.sh"]
Then in your run.sh file that gets run at container launch you can just access the environment variables
e.g.
./runmything.sh "$ENV_TO_INJECT"
Would that work for you?
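Putting the pieces together, a minimal sketch could look like this (the image name myimage, the value my_value, and runmything.sh are just placeholders from the example above).
In the Dockerfile:
COPY run.sh /my/path/to/run.sh
CMD ["/my/path/to/run.sh"]
In run.sh (assumed to sit next to runmything.sh in the working directory):
#!/bin/sh
# Forward the injected environment variable to the real program
./runmything.sh "$ENV_TO_INJECT"
Then launch the container with the value injected:
docker run -e ENV_TO_INJECT=my_value myimage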
I am using Docker to containerize a Python script. If Docker wasn't in the picture, I would want to pass a file path to the script, which would proceed to work on that file.
python coolscript.py data.csv
As a Docker novice, I'm not sure how to accomplish this. Currently, I am automatically executing the script when the container launches.
docker run coolcontainer python coolscript.py data.csv
Since the data.csv file path isn't known when the image is built, it's not imported into the container and I can't seem to access it. I've seen some forums saying to mount the host filesystem, but that seems like overkill since I just want one file. Is there a way to just send that one file into the container at runtime? How would you architect this?
The -v option for bind mounts should do the trick:
docker container run -v /my/host/path:/my/container/path coolcontainer python /my/container/path/coolscript.py /my/container/path/data.csv
Place both files in /my/host/path
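For example, if coolscript.py and data.csv both sit in the current directory on the host, the run might look like this (the /data mount point inside the container is an arbitrary choice):
docker container run -v "$(pwd)":/data coolcontainer python /data/coolscript.py /data/data.csv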
I am new to Docker and containers. I have a container consisting of MRI analysis software. Within this container are many other software packages that the main software draws its commands from. I would like to run a single command from one of these packages, using research data that is located on an external hard drive plugged into my local machine, which is running Docker.
I know there is a docker cp command for copying files (such as scripts) into containers, and most other questions along these lines seem to recommend copying the files from your local machine into the container and then running the script (or whatever) from the container. In my case I need the container to access data from separate folders in a directory structure, and copying over the entire directory is not feasible since it is quite large.
I honestly just want to know how I can run a single command inside the container using inputs present on my local machine. I have run docker ps to get the CONTAINER_ID, which is d8dbcf705ee7. Having looked into executing commands inside containers, I tried the following command:
docker exec d8dbcf705ee7 /bin/bash -c "mcflirt -in /Volumes/DISS/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold.nii -out sub-S06V1A_task-compound_run-01_bold_mcf_COMMAND_TEST.nii.gz -reffile /Volumes/DISS/FMRIPREP_TMP/sub-S06V1A_dof6_ver1.2.5/fmriprep_wf/single_subject_S06V1A_wf/func_preproc_task_compound_run_01_wf/bold_reference_wf/gen_ref/ref_image.nii.gz -mats -plots"
mcflirt is the command I want to run inside the container. I believe the exec command should do what I hope, since if I run docker exec d8dbcf705ee7 /bin/bash -c "mcflirt" I get help output for the mcflirt command, which is the expected outcome in that case. The files under the /Volumes/... paths are the files on my local machine that I would like to access. I understand that the location of the files is the problem, since I cannot tab-complete the paths within this command; when I run it I get the following output:
Image Exception : #22 :: ERROR: Could not open image /Volumes/DISS/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold
terminate called after throwing an instance of 'RBD_COMMON::BaseException'
Can anyone point me in the right direction?
So if I got you right, you need to execute some shell script and provide the context (like local files).
The approach is straightforward.
Let's say your script and all the needed files are located in the /hello folder of your host PC (it doesn't really matter whether they are stored together or not; this just shows the technique).
/hello
- runme.sh
- datafile1
- datafile2
You mount this folder into your container to make the files accessible inside. If you don't need the container to modify them, it's better to mount it in read-only mode.
You launch docker like this:
docker run -it -v /hello:/hello2:ro ubuntu /hello2/runme.sh
And that's it! Your script runme.sh gets executed inside the container, and it has access to the files next to it, thanks to the -v /hello:/hello2:ro directive. It maps the host folder /hello onto the container folder /hello2 in read-only (ro) mode.
Note that you can use the same name on both sides; I've only made them different here to show the mapping.
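For completeness, runme.sh could be as small as this sketch, reading the data files through the mounted path:
#!/bin/sh
# The data files are visible under the mount point chosen with -v
wc -l /hello2/datafile1 /hello2/datafile2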
In my Ubuntu server I have the following directory structure: /home/docker/groovy. In this location I have a simple Groovy file. In Docker, the container groovy_repo_1 is running.
After I entered the groovy directory I wanted to run this script in the container:
docker exec groovy_repo_1 docker.groovy
Output:
rpc error: code = 2 desc = oci runtime error: exec failed:
container_linux.go:247: starting container process caused "exec:
\"docker.groovy\": executable file not found in $PATH"
Why does this happen?
Docker works with long-lived immutable images and short-lived containers. If you have a script or any other sort of program you want to run, the best practice is generally to package it into an image and then run a container off of it. There is already a standard groovy image so your Dockerfile can be something as basic as:
FROM groovy:2.6
RUN mkdir /home/groovy/scripts
WORKDIR /home/groovy/scripts
COPY docker.groovy .
CMD ["groovy", "docker.groovy"]
You can develop and test your application locally, then use Docker to deploy it. Especially if you're looking at multi-host deployment solutions like docker-swarm or kubernetes, it's important that the image be self-contained and have the script included in it.
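With a Dockerfile like the one above, building and running would look roughly like this (my-groovy-script is just an example tag):
docker build -t my-groovy-script .
docker run --rm my-groovy-script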
Your server and your container have different filesystems, unless you specify otherwise by mounting a server folder onto a container folder with the --volume (-v) option.
Here you expect your container to know about the docker.groovy file just because you ran the command from the server folder containing the file.
One way to do this would be to start a container with your current server folder mounted at some path in your container, and run the groovy script as the entrypoint. Something like this (untested):
docker run -v "$(pwd)":/random groovy_repo_1 /random/docker.groovy
Does the file exist... in the path and inside the container?
The path inside the container may be different from the path on your host. You can update the PATH environment variable during the build, or you can call the binary with a full path, e.g. /home/container_user/scripts/docker.groovy (we don't know where this file is inside your container from the question provided). That path needs to point to the script inside of your container; Docker doesn't have direct access to the host filesystem, but you can run your container with a volume mount to bind mount a path on the host into the container.
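For instance, you can check what the container actually sees (a sketch; /home/groovy/scripts is only a guess at where the script might live inside your image):
docker exec groovy_repo_1 sh -c 'echo "$PATH"; ls -l /home/groovy/scripts'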
If it is a shell script, check the first line, e.g. #!/bin/bash
You may have a groovy command at the top of your script. This binary must exist inside the container and be in the PATH that's defined inside the container.
Check for windows linefeeds on linux shell scripts
If the script was written on windows, you may have the wrong linefeeds in the script. It will look for groovy\r instead of groovy and the first command won't exist.
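One quick way to detect and strip them on the host (a sketch; dos2unix is an alternative if it's available):
file docker.groovy               # "with CRLF line terminators" means Windows linefeeds
sed -i 's/\r$//' docker.groovy   # strip the carriage returns in place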
If it is a binary, there is likely a missing library
You'll see this if the groovy binary was added with something like a COPY command, instead of compiling locally or installing from the package manager. You can use ldd /path/to/groovy inside the container to inspect the linked libraries.
This list is from my DC2018 presentation: https://sudo-bmitch.github.io/presentations/dc2018/faq-stackoverflow.html#59
I am trying to pass a directory into the container, in a way that can eventually be automated. However, I don't see any alternative other than physically editing the Dockerfile and manually typing the specific directory to be added.
Note: I have tried mounted volumes; however, that solution doesn't help my issue, as I want to eventually invoke the container on a directory and have a script run against that directory inside the container, not simply copy the local directory into the container.
Method 1:
$ docker build --build-arg project_directory=/path/to/dir .
ARG project_directory
ADD $project_directory .
My unsuccessful solution assumes that I can use the argument's value as a basic string that the ADD command can interpret, just as if I had manually entered the path.
not simply copying the local directory inside the container
That's exactly what you're doing now, by using ADD $project_directory. If you need to make changes from the container and have them reflected onto the host, use:
docker run -v $host_dir:$container_dir image:tag
The command above launches a new container, and it's quite possible for you to launch it with different directory names. You can do so in a loop, from a jenkins pipeline, a shell script, or whatever suits your development environment.
#!/bin/bash
# Launch one container per source directory, mounting each at the same path inside the container
container_dir=/workspace
for directory in /src /realsrc /kickasssrc
do
    docker run -v "$directory":"$container_dir" image:tag
done
In my docker-compose file, I am mounting a local folder onto a folder for Docker. I can see and use the mounted volume with CMD in the Dockerfile, but not with RUN. From the docs, RUN seems to execute in a totally clean layer. Is there a way for RUN to use mount points specified in the docker-compose file?
You can't mount volumes during the docker build process, regardless of whether you use docker itself, docker-compose, or some other tool. The whole idea is that the build process is supposed to be as independent of your environment as possible, so that the resulting images have no dependencies on your local system and can be more easily shared.
There are generally alternate ways of approaching whatever problem you're trying to solve that do not require trying to expose data into your build process.
Actually there's a big difference between CMD and RUN
CMD is used to provide the arguments or command that is executed when you start the container.
RUN is used to provide a command that is executed at build time to create a new image layer.
In short: volumes are not available during the build step (when RUN is executed).
Docker containers have two ways of providing "external" files:
In the build step, the build context is passed.
In the run step, the container layer + volumes are used (see the sketch below).
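A minimal illustration of the two mechanisms (data.txt, myimage, and the paths are placeholder names; /host/data is assumed to contain its own data.txt):
# Dockerfile: a file needed at build time (by RUN) must come from the build context
FROM alpine
COPY data.txt /app/data.txt
RUN wc -l /app/data.txt
CMD ["wc", "-l", "/app/mounted/data.txt"]
Build the image, then mount a host folder only for the run step:
docker build -t myimage .
docker run -v /host/data:/app/mounted myimage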
See for CMD:
https://docs.docker.com/engine/reference/builder/#cmd
https://docs.docker.com/engine/reference/builder/#entrypoint
See for RUN:
https://docs.docker.com/engine/reference/builder/#run
For context see:
https://docs.docker.com/engine/reference/builder/#usage
(docker-compose) https://docs.docker.com/compose/compose-file/#build