Docker Ubuntu: sourcing files after starting image

I built myself an image for ROS. I run it while mounting my original home from the host, plus some tricks to get graphics working as well. After starting the shell inside docker I always need to execute two source commands. One of the files to be sourced is actually inside the container, but the other resides in my home, which only gets mounted when the container starts. I would like to have these two files sourced automatically.
I tried adding
RUN bash -c "source /opt/ros/indigo/setup.bash"
to the image file, but this did not actually source it. Using CMD instead of RUN didn't drop me into the container's shell (I assume it finished executing source and then exited?). I don't even know how to source the file that only becomes available after startup. What would I need to do?

TL;DR: you need to perform this step as part of your CMD or ENTRYPOINT, and for something like a source command, you need a step after it in the same shell that runs your app, or whatever shell you'd like. If you just want a bash shell as your command, then put your source command inside something like your .bashrc file. Or you can run something like:
bash -c "source /opt/ros/indigo/setup.bash && bash"
as your command.
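One way to wire that up is a small entrypoint script baked into the image. A minimal sketch, assuming the second file lands in your mounted home at something like ~/catkin_ws/devel/setup.bash (a hypothetical path; substitute your real one):

#!/bin/bash
# entrypoint.sh: source the in-image ROS setup, then the file from the
# mounted home (if present), then hand off to the requested command.
source /opt/ros/indigo/setup.bash
# Hypothetical path; replace with the file inside your mounted home.
if [ -f "$HOME/catkin_ws/devel/setup.bash" ]; then
  source "$HOME/catkin_ws/devel/setup.bash"
fi
exec "$@"

And in the Dockerfile:

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["bash"]

Because the entrypoint runs after volumes are mounted, the file from your home is available by the time it is sourced, and exec "$@" drops you into an interactive bash by default.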
One of the files to be sourced is actually inside the container, but the other resides in my home, which only gets mounted when the container starts.
...
I tried adding ... to the image file
Images are built using temporary containers that only see your Dockerfile instructions and the context sent with that to run the build. Containers use that built image and all of your configuration, like volumes, to run your application. There's a hard divider between those two steps, image build and container run, and your volumes are not available during that image build step.
Each of those RUN steps performed during the image build is done in a temporary container that only stores the resulting filesystem when it's finished. Changes to your environment, a cd into another directory, spawned processes or services in the background, or anything else not written to the filesystem when the command spawned by RUN exits, will be lost. This is one reason you'll see commands chained together into a single long RUN instruction, and it's why the Dockerfile provides ENV and WORKDIR instructions.
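A small illustration of why that matters (paths are hypothetical):

# the cd below is lost when this RUN exits; only the file survives
RUN cd /tmp && touch /tmp/built-here
# this runs in the default directory again, not /tmp
RUN touch another-file
# ENV and WORKDIR persist into the image metadata, unlike shell state in RUN
ENV PATH=/opt/ros/indigo/bin:$PATH
WORKDIR /workspace

Only the filesystem changes from the first two lines survive; the ENV and WORKDIR instructions exist precisely because shell state from RUN does not carry over.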

Related

How to Run a Command in a Container Using Local Input Files without Copying

I am new to docker and containers. I have a container containing MRI analysis software. Within this container are many other software packages that the main software draws its commands from. I would like to run a single command from one of these packages in this container, using research data that is located on an external hard drive plugged into the local machine that is running docker.
I know there is a cp command for copying files (such as scripts) into containers and most other questions along these lines seem to recommend copying the files from your local machine into the container and then running the script (or whatever) from the container. In my case I need the container to access data from separate folders in a directory structure and copying over the entire directory is not feasible since it is quite large.
I honestly just want to know how I can run a single command inside the container using inputs present on my local machine. I have run docker ps to get the CONTAINER_ID, which is d8dbcf705ee7. Having looked into executing commands inside containers, I tried the following command:
docker exec d8dbcf705ee7 /bin/bash -c "mcflirt -in /Volumes/DISS/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold.nii -out sub-S06V1A_task-compound_run-01_bold_mcf_COMMAND_TEST.nii.gz -reffile /Volumes/DISS/FMRIPREP_TMP/sub-S06V1A_dof6_ver1.2.5/fmriprep_wf/single_subject_S06V1A_wf/func_preproc_task_compound_run_01_wf/bold_reference_wf/gen_ref/ref_image.nii.gz -mats -plots"
mcflirt is the command I want to run inside the container. I believe the exec command would do what I hope since if I run docker exec d8dbcf705ee7 /bin/bash -c "mcflirt" I will get help output for the mcflirt command which is the expected outcome in that case. The files inside of the /Volume/... paths are the files on my local machine I would like to access. I understand that the location of the files is the problem since I cannot tab complete the paths within this command; when I run this I get the following output:
Image Exception : #22 :: ERROR: Could not open image /Volumes/DISS/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold
terminate called after throwing an instance of 'RBD_COMMON::BaseException'
Can anyone point me in the right direction?
So if I understood you correctly, you need to execute a shell script and provide it with context (like local files).
The way to do this is straightforward.
Let's say your script and all the files it needs are located in the /hello folder of your host PC (it doesn't really matter whether they're stored together or not; this just shows the technique).
/hello
- runme.sh
- datafile1
- datafile2
You mount this folder into your container to make the files accessible inside. If you don't need the container to modify them, it's better to mount it in read-only mode.
You launch docker like this:
docker run -it -v /hello:/hello2:ro ubuntu /hello2/runme.sh
And that's it! Your script runme.sh gets executed inside the container and has access to the nearby files, thanks to the -v /hello:/hello2:ro option, which maps the host folder /hello to the container folder /hello2 in read-only (ro) mode.
Note that you can use the same name on both sides; I've made them different here just to show which is which.
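Applied to the mcflirt question above, this means starting a new container with the external drive mounted, since docker exec cannot add mounts to an already-running container. A sketch, with the image name as an assumption:

docker run --rm -v /Volumes/DISS:/Volumes/DISS -w /Volumes/DISS mri-analysis-image \
  /bin/bash -c "mcflirt -in /Volumes/DISS/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold.nii ..."

Mounting at the same path the command expects lets the original paths be reused unchanged, and -w sets the working directory onto the mount so relative output paths land on the host rather than in the throwaway container.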

Persist Docker container after running short-lived scripts

I have two Python scripts that I'm running in a container. The first script loads some data from disk, does some manipulation, and then saves the output in the container. The second script does a similar thing, again saving output on the container. However, once these scripts are done running, my container is essentially "done", and Kubernetes re-deploys the same build, forever. I want to be able to run these scripts once but access the results whenever I need them, without the container continuously being rebuilt.
Here's my Dockerfile, generally:
FROM X
...
RUN python3 script1.py
RUN python3 script2.py
Currently I'm trying CMD sleep infinity to try to access the container through the shell later, but that isn't working. I've also tried ENTRYPOINT ["sh"], to no avail.
So generally, the Dockerfile I'm now using looks like this:
FROM X
...
RUN python3 script1.py
RUN python3 script2.py
CMD sleep infinity
In Kubernetes/OpenShift you would use a Job. But to keep the results you will also need to claim a persistent volume and mount it into the Job's Pod, giving you a place to save them. You could create a temporary pod later on to access the results from the persistent volume.
You can use docker-compose here. Mount a directory from your host, say /var/opt/logs (or whichever directory you want), to the directory inside the container where your results are stored. Both directories and their files will then stay in sync regardless of whether your container is up or down. That's how I mount the scripts I need when a container is up and running.
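A minimal sketch of that idea with a plain docker run, assuming the scripts write their output to /app/output inside the container (the image name and paths are placeholders):

# Bind-mount a host directory over the output directory; the results stay
# in /var/opt/results on the host after the container exits.
docker run --rm -v /var/opt/results:/app/output my-scripts-image \
  sh -c "python3 script1.py && python3 script2.py"

Moving the scripts from RUN (build time) into the command (run time) also means they execute against the mounted volume instead of being baked into the image.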

Docker Parameterize Files Passed Inside

I am trying to pass a directory into the container, in a way that can eventually be automated. However, I don't see any alternative other than physically editing the Dockerfile and manually typing the specific directory to be added.
Note: I have tried mounted volumes; however, that solution doesn't help my issue, as I want to eventually call the container on a directory which will then have a script run on it inside the container, not simply copy the local directory into the container.
Method 1:
$ docker build --build-arg project_directory=/path/to/dir .
ARG project_directory
ADD $project_directory .
My unsuccessful solution assumes that I can use the argument's value as a basic string that the ADD command can interpret, just as if I had manually entered the path.
not simply copying the local directory inside the container
That's exactly what you're doing now, by using ADD $project_directory. If you need to make changes from the container and have them reflected onto the host, use:
docker run -v $host_dir:$container_dir image:tag
The command above launches a new container, and it's quite possible for you to launch it with different directory names. You can do so in a loop, from a Jenkins pipeline, a shell script, or whatever suits your development environment.
#!/bin/bash
# Run the same image against each source directory in turn,
# mounting it at the same path inside the container.
container_dir=/workspace
for directory in /src /realsrc /kickasssrc; do
    docker run -v "$directory":"$container_dir" image:tag
done
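As an aside on the original Method 1: build args are substituted into ADD, but the source path must live inside the build context and is interpreted relative to it, so an absolute host path like /path/to/dir cannot work there. A sketch under that constraint (names hypothetical):

ARG project_directory=default_dir
ADD $project_directory /project

$ docker build --build-arg project_directory=some/subdir -t image:tag .

This still bakes the files into the image at build time, which is why the volume mount above is a better fit for running scripts against changing directories.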

Difference between Docker Build and Docker Run

If I want to run a python script in my container what is the point in having the RUN command, if I can pass in an argument at build along with running the script?
Each time I run the container I want x.py to be run on an ENV variable passed in during the build stage.
If I were to use Swarm, and the only goal was to run the x.py script, swarm would only be building nodes, rather than building and eventually running, since the CMD and ENTRYPOINT instructions only happen at run time.
Am I missing something?
The docker build command creates an immutable image. The docker run command creates a container that uses the image as a base filesystem, and other metadata from the image is used as defaults to run that image.
Each RUN line in a Dockerfile is used to add a layer to the image filesystem in docker. Docker actually performs that task in a temporary container, hence the selection of the confusing "run" term. The only thing preserved from a RUN command is the set of filesystem changes; running processes, changes to environment variables, and shell settings like the current working directory are all lost when the temporary container is cleaned up at the completion of the RUN command.
The ENTRYPOINT and CMD values are used to specify the default command to run when the container is started. When both are defined, the entrypoint is run with the value of the cmd appended as command line arguments. The value of CMD is easily overridden at the end of the docker run command line, so by using both you get easy-to-reconfigure containers that run the same command with different user input parameters.
If the command you are trying to run needs to be performed every time the container starts, rather than being stored in the immutable image, then you need to perform that command in your ENTRYPOINT or CMD. This will add to the container startup time, so if the result of that command can be stored as a filesystem change and cached for all future containers being run, you want to make that setting in a RUN line.
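A short sketch of that split (the script name and default argument are assumptions):

FROM python:3
COPY x.py /x.py
# Fixed part of the command; always runs.
ENTRYPOINT ["python3", "/x.py"]
# Default argument; easily overridden at the end of docker run.
CMD ["default-input"]

docker run image:tag              # runs: python3 /x.py default-input
docker run image:tag other-input  # runs: python3 /x.py other-input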

Can I run a bash script from a file in a separate docker volume before a container starts?

I have a situation where i need to source a bash environment file that lives in a separate volume (pulled in via volumes_from in docker-compose) when a container starts so that all future commands run on the container will execute under that bash environment (it runs some scripts and sets a lot of dynamic variables pulled in from other places). The reason I'm using a volume instead of just adding this command directly to the image is because the environment file I need to include is outside the Dockerfile context, and Dockerfiles don't support that.
I tried adding a source /path/to/volume/envfile line in to the root user's .bashrc file in the hopes that it would be run when the container started, but that didn't work. I'm assuming that's because the volumes aren't actually mounted until after the container / shell has started and .bashrc commands have already run (which makes sense).
Does anyone have any idea on how I can accomplish something like this? I'm open to alternative methods, however the one thing I can't change here is moving the file I need inside of the Docker context, as that would break quite a number of other things.
My (slightly edited) Dockerfile and docker-compose.yml files: https://gist.github.com/joeellis/235d90799eb647ab00ec
EDIT: And as a test, I'm trying to run a rake db:create:all on the container, like docker-compose run app rake db:create:all which is returning an error that the environment file I need cannot be found / loaded. Interestingly enough, if I shell into the container and run the command, it all seems to work just great. So maybe when a container is given a command via run, it doesn't necessarily open up a shell, but uses something else?
The problem is that the shell in which your /src/app/bin/start-app is run is not an interactive shell, so .bashrc is not read!
You can fix this by doing this:
Add the env file to /root/.profile instead:
RUN echo "source /src/puppet/path/to/file/env/bootstrap" >> /root/.profile
Also run your command as a login shell (sh is bash for you anyway, thanks to the hack in the Dockerfile :) ):
command: "sh -lc '/src/app/bin/start-app'"
as the command. This should work fine :)
The problem really is just that the file never gets sourced, because you're running in a non-interactive, non-login shell when running via the docker command instruction.
It works when you shell into the container because, bam, you get an interactive shell that sources that file :)
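A condensed sketch of the fix (paths taken from the gist, treated here as assumptions):

# Dockerfile: make every login shell source the env file from the volume.
RUN echo "source /src/puppet/path/to/file/env/bootstrap" >> /root/.profile

# docker-compose run now goes through a login shell (-l), which reads
# /root/.profile after the volume is mounted:
docker-compose run app sh -lc 'rake db:create:all'

The key detail is the -l flag: without it the shell is neither interactive nor login, so neither .bashrc nor .profile is read, and the environment file is never sourced.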
