I am trying to pass a directory into a container in a way that can eventually be automated. However, I don't see any alternative to physically editing the Dockerfile and manually typing the specific directory to be added.
Note: I have tried mounted volumes, but that doesn't help my issue: I want to eventually call the container on a directory and have a script inside the container run on that directory, not simply copy the local directory into the container.
Method 1:
$ docker build --build-arg project_directory=/path/to/dir .
ARG project_directory
ADD $project_directory .
My unsuccessful solution assumes that I can use the argument's value as a plain string that the ADD command interprets just as if I were manually entering the path.
not simply copying the local directory inside the container
That's exactly what you're doing now, by using ADD $project_directory. If you need to make changes from the container and have them reflected onto the host, use:
docker run -v "$host_dir":"$container_dir" image:tag
The command above launches a new container, and you can launch it with different directory names. You can do so in a loop, from a Jenkins pipeline, a shell script, or whatever suits your development environment.
#!/bin/bash
container_dir=/workspace
for directory in /src /realsrc /kickasssrc
do
docker run -v "$directory":"$container_dir" image:tag
done
Related
Does anyone know whether Docker somehow caches files/file systems? And if it does, is there a way to suppress that or force it to regenerate/refresh the cache? I am editing my files outside my docker image, but the directory inside the docker image doesn't seem to include them. However, outside the docker image the "same" directory does include the files. The only explanation that makes sense to me is that docker has an internal "copy" of the directory and isn't going to the disk, so it sees an outdated copy of the directory from before the file was added.
Details:
I keep my "work" files in a directory on a separate partition (/Volumes/emacs) on my MacBook, i.e.:
/Volumes/emacs/datapelago
I do my editing in emacs on the MacBook, but not in the docker container. I have a link to that directory called:
/projects
And I might edit or create a file called:
/projects/nuc-440/qflow/ToAttribute.java
In the meantime I have a docker container I created this way:
docker container create -p 8080:8080 -v /Volumes/emacs/datapelago:/projects -v /Users/christopherfclark:/Users/cfclark --name nuc-440 gradle:7.4.2-jdk17 tail -f /dev/null
I keep a shell running in that container:
docker container exec -it nuc-440 /bin/bash
cd /projects/nuc-440
And after making changes I run the build/test sequence:
gradle clean;gradle build;gradle test
However, recently I have noticed that when I make changes or add files, they aren't always reflected inside the docker container, which of course can cause the build to fail or the tests not to pass, etc.
Thus, this question.
I'd rather not have to start/stop the container each time; instead I'd like to keep it running and tell it to "refetch" the /projects/nuc-440 directory and its children.
I restarted the docker container and it is now "tracking" file changes again. That is, I can make changes in macOS and they are reflected inside docker with no additional steps. I don't seem to have to continually restart it. It must have gotten into a "weird" state. However, I don't have any useful details beyond that. Sorry.
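For reference, restarting the container created above (named nuc-440) is a single command:

```shell
docker restart nuc-440
```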
I am new to docker and containers. I have a container consisting of an MRI analysis software. Within this container are many other software the main software draws its commands from. I would like to run a single command from one of the softwares in this container using research data that is located on an external hard drive which is plugged into my local machine that is running docker.
I know there is a cp command for copying files (such as scripts) into containers and most other questions along these lines seem to recommend copying the files from your local machine into the container and then running the script (or whatever) from the container. In my case I need the container to access data from separate folders in a directory structure and copying over the entire directory is not feasible since it is quite large.
I honestly just want to know how I can run a single command inside the docker container using inputs present on my local machine. I have run docker ps to get the CONTAINER_ID, which is d8dbcf705ee7. Having looked into executing commands inside containers, I tried the following command:
docker exec d8dbcf705ee7 /bin/bash -c "mcflirt -in /Volumes/DISS/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold.nii -out sub-S06V1A_task-compound_run-01_bold_mcf_COMMAND_TEST.nii.gz -reffile /Volumes/DISS/FMRIPREP_TMP/sub-S06V1A_dof6_ver1.2.5/fmriprep_wf/single_subject_S06V1A_wf/func_preproc_task_compound_run_01_wf/bold_reference_wf/gen_ref/ref_image.nii.gz -mats -plots"
mcflirt is the command I want to run inside the container. I believe the exec command would do what I hope, since if I run docker exec d8dbcf705ee7 /bin/bash -c "mcflirt" I get the help output for the mcflirt command, which is the expected outcome in that case. The files at the /Volumes/... paths are the files on my local machine that I would like to access. I understand that the location of the files is the problem, since I cannot tab-complete the paths within this command; when I run it I get the following output:
Image Exception : #22 :: ERROR: Could not open image /Volumes/DISS/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold
terminate called after throwing an instance of 'RBD_COMMON::BaseException'
Can anyone point me in the right direction?
So if I understood you correctly, you need to execute a shell script and provide it with context (like local files).
The approach is straightforward.
Let's say your script and all needed files are located in the /hello folder of your host PC (it doesn't really matter whether they are stored together or not; this just shows the technique).
/hello
- runme.sh
- datafile1
- datafile2
You mount this folder into your container to make the files accessible inside. If you don't need the container to modify them, it's better to mount them in read-only mode.
You launch docker like this:
docker run -it -v /hello:/hello2:ro ubuntu /hello2/runme.sh
And that's it! Your script runme.sh is executed inside the container and has access to the neighboring files, thanks to the -v /hello:/hello2:ro directive, which maps the host folder /hello to the container folder /hello2 in read-only (ro) mode.
Note that the two names can be the same; I made them different just to show the distinction.
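Applied to the mcflirt question above: docker exec cannot see the /Volumes/... paths because bind mounts can only be attached when a container is created, not added to a running container. A sketch using the paths from the question, assuming the image behind container d8dbcf705ee7 is named mri-image (a hypothetical name):

```shell
# Mount the external drive into a new container, then run mcflirt on it.
# Writing the output under /data puts it back on the external drive.
docker run --rm -v /Volumes/DISS:/data mri-image \
  mcflirt -in /data/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold.nii \
          -out /data/sub-S06V1A_task-compound_run-01_bold_mcf_COMMAND_TEST.nii.gz \
          -reffile /data/FMRIPREP_TMP/sub-S06V1A_dof6_ver1.2.5/fmriprep_wf/single_subject_S06V1A_wf/func_preproc_task_compound_run_01_wf/bold_reference_wf/gen_ref/ref_image.nii.gz \
          -mats -plots
```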
I built myself an image for ROS. I run it while mounting my original home from the host, plus some tricks to get graphics working as well. After starting the shell inside docker I always need to execute two source commands. One of the files to be sourced is actually inside the container, but the other resides in my home, which only gets mounted on starting the container. I would like to have these two files sourced automatically.
I tried adding
RUN bash -c "source /opt/ros/indigo/setup.bash"
to the image file, but this did not actually source it. Using CMD instead of RUN didn't drop me into the container's shell (I assume it finished executing source and then exited?). I don't even have an idea how to source the file that is only available after startup. What would I need to do?
TL;DR: you need to perform this step as part of your CMD or ENTRYPOINT, and for something like a source command, you need a step after that in the shell to run your app, or whatever shell you'd like. If you just want a bash shell as your command, then put your source command inside something like your .bashrc file. Or you can run something like:
bash -c "source /opt/ros/indigo/setup.bash && bash"
as your command.
One of the files to be sourced is actually inside the container, but the other resides in my home, which only gets mounted on starting the container.
...
I tried adding ... to the image file
Images are built using temporary containers that only see your Dockerfile instructions and the context sent with that to run the build. Containers use that built image and all of your configuration, like volumes, to run your application. There's a hard divider between those two steps, image build and container run, and your volumes are not available during that image build step.
Each of those RUN steps performed during the image build is done in a temporary container that only stores the resulting filesystem when it's finished. Changes to your environment, a cd into another directory, spawned processes or services in the background, or anything else not written to the filesystem when the command spawned by RUN exits, will be lost. This is one reason you will see commands chained together in a single long RUN command, and it's why you have ENV and WORKDIR commands in the Dockerfile.
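A minimal sketch of the ENTRYPOINT approach from the TL;DR above, assuming the home directory is mounted at /hosthome and the second setup file lives at /hosthome/extra_setup.bash (both hypothetical paths). The entrypoint sources the baked-in file unconditionally and the mounted file only if it exists, because the volume is not available at build time:

```shell
#!/bin/bash
# entrypoint.sh -- shipped in the build context
set -e
# This file is baked into the image, so it is always present
source /opt/ros/indigo/setup.bash
# This file only appears once the home volume is mounted at run time
if [ -f /hosthome/extra_setup.bash ]; then
    source /hosthome/extra_setup.bash
fi
exec "$@"
```

```dockerfile
FROM ros:indigo
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["bash"]
```

Started with something like docker run -it -v "$HOME":/hosthome image, this drops you into a bash shell with both files already sourced.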
I have a docker-compose dev stack. When I run docker-compose up --build, the container will be built and it will execute
Dockerfile:
RUN composer install --quiet
That command writes a bunch of files into the ./vendor/ directory, which is then only available inside the container, as expected. The vendor/ directory that also exists on the host is not touched and is, hence, out of date.
Since I use that container for development and want my changes to be available, I mount the current directory inside the container as a volume:
docker-compose.yml:
my-app:
volumes:
- ./:/var/www/myapp/
This loads an outdated vendor directory into my container, forcing me to rerun composer install either on the host or inside the container in order to have the up-to-date version.
I wonder how I could manage my docker-compose stack differently, so that the changes during the docker build on the current folder are also persisted on the host directory and I don't have to run the command twice.
I do want to keep the vendor folder mounted, as some of the vendors are my own and I like being able to modify them in my current project. So mounting only the folders I need to run my application would not be the best solution.
I am looking for a way to tell docker-compose: Write all the stuff inside the container back to the host before adding the volume.
You can run a short side container after docker-compose build:
docker run --rm -v "$(pwd)/vendor:/target" my-app cp -a /var/www/myapp/vendor/. /target/
The cp could also be something more efficient like an rsync. Then after that container exits, you do your docker-compose up which mounts /vendor from the host.
Write all the stuff inside the container back to the host before adding the volume.
There isn't any way to do this directly, but there are a few options to do it as a second command.
As already suggested, you can run a container and copy or rsync the files.
use docker cp to copy the files out of a container (without using a volume)
use a tool like dobi (disclaimer: dobi is my own project) to automate these tasks. You can use one image to update vendor, and another image to run the application. That way updates are done on the host, but can be built into the final image. dobi takes care of skipping unnecessary operations when the artifact is still fresh (based on modified time of files or resources), so you never run unnecessary operations.
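The docker cp option from the list above might look like this; the image name my-app-image is an assumption (use whatever docker-compose built for the my-app service):

```shell
# Build the image so "composer install" populates /var/www/myapp/vendor
docker-compose build my-app

# docker cp also works on a stopped container, so create one, copy the
# vendor tree out to the host, then remove the container again
id=$(docker create my-app-image)
docker cp "$id":/var/www/myapp/vendor ./vendor
docker rm "$id"
```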
I am currently exploring how to remove a file/folder that resides inside a docker container programmatically. I know we can copy files from a container to the host using docker cp. However, I am looking for something like docker mv or docker rm which would allow me to move or remove files/folders inside docker.
The scenario is: we are writing automated test cases in Java, and for one case we need to copy the log file from the server's log folder to assert the log written by the test case. We are using docker cp to copy the log files. However, the file contains the logs of old test cases as well. So I was thinking that if I could remove the log files before executing my test case, it would ensure the log I copy was written by my test case only. Is there any other way around this?
You can use the command below to remove the files from a program running on the host:
docker exec <container> rm -rf <YourFile>
However, if old files exist because the container was never removed, then the general practice is to remove the container once the whole test suite has finished executing.
A new job should then create a new container.
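Put together, a hedged sketch of the flow (the container name, log path, and test command are all assumptions):

```shell
# 1. Clear the old logs inside the running container
docker exec my-server rm -f /var/log/myapp/server.log

# 2. Run the test case (hypothetical Maven invocation)
mvn test -Dtest=MyLogTest

# 3. Copy out the log, which now only contains this test case's output
docker cp my-server:/var/log/myapp/server.log ./server.log
```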
In the Dockerfile you can use
RUN rm [folder-path]
However, anything added using ADD or COPY will still increase the size of your image.
EDIT: For accessing a running container from an external program running on your host try this.
Mount a host folder as a volume when starting the instance.
Run a script or program on the host to delete the desired folders on the host, which will affect the container's file system as well.
You can access the container's bash console:
#console control over container
docker exec -it a5866aee4e90 bash
Then when you are inside the container you can do anything with console.
I'm using this command to find and rename files in my JBoss deployments directory. Modify it to fit your needs. You can also delete files: instead of mv, use rm.
find /home/jboss/jbossas/standalone/deployments -depth -name "*.failed" -exec sh -c 'mv "$1" "${1%.failed}.dodeploy"' _ {} \;
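A deletion variant of the same find pattern, run against a scratch directory so it can be tried safely (the /tmp/deployments path is just for illustration):

```shell
# Create a scratch tree with a couple of stale .failed markers
mkdir -p /tmp/deployments/sub
touch /tmp/deployments/app.war.failed \
      /tmp/deployments/sub/other.failed \
      /tmp/deployments/keep.dodeploy

# Remove every *.failed marker instead of renaming it
find /tmp/deployments -depth -name "*.failed" -exec rm -f {} \;
```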