Monitoring file changes in Docker volumes

I have a Docker container that runs a Python script: it waits for input requests and processes data accordingly.
Since I am using Docker for development, I would like the container to stop the Python script and relaunch it with the new code whenever I change the script's source (on my machine, not in the container). Right now I have to stop the container and relaunch it manually. I could also monitor the file changes on my side (rather than inside the container), but I would like to avoid that and do it within the container itself.
I am using docker-compose's volumes option to share the source code between my FS and the container's.
To monitor the file changes, I've been trying to use the watchmedo shell utility from the watchdog Python module. The weird problem I have is that changes to that Python source file are only noticed when I edit it from inside the container, not from my local FS, even though the folder is mounted with the volumes option.
I get the feeling this has something to do with how Docker works, and maybe with volumes in particular. I've been trying to read up on it online, but haven't had much luck. Any ideas? I'm totally stuck!
EDIT: Here's a gif that better explains it. The top two panes are connected to the same container and the bottom two to my local machine. All the panes point to the same folder.

You could have your container run something like this (you need inotify-tools installed for inotifywait):
while true
do
    # block until the script is created or modified
    inotifywait -e create -e modify /path/to/python/script
    # stop the currently running instance
    pkill python
    # relaunch the script with the new code
    python /path/to/python/script
done
Basically: wait for changes to the file, kill the running Python process, and run the script again.
If the Python script is not running in the background/daemonized in any way, you can send it to the background with an &, like so: python /path/to/python/script &
Put this into run.sh and add something like this to your Dockerfile:
COPY run.sh /run.sh
CMD ["bash", "-l", "/run.sh"]
... and you should be good.
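Alternatively, since you already have watchdog available, watchmedo can handle the kill-and-restart loop itself. A minimal sketch (the source directory is a placeholder, and check that your watchmedo version supports these options):
watchmedo auto-restart --directory=/path/to/source --patterns="*.py" --recursive -- python /path/to/python/script
This watches the directory recursively and restarts the wrapped command whenever a matching file changes.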

Related

Can I disable "-t" option from docker image

We would like to put a Docker image onto our client's machine, which is not connected to the internet.
To prevent a possible code leak (a Python script) to some degree, my idea is to build an image that only leaves a specific port open and to write socket programs to pass commands/data through it.
Before I put this into practice, I need to make sure the user cannot access the running container by any other method, including the "-t" option.
So as a newbie, I would like to ask if this is possible. Thanks a lot!
No: a Docker image has no way to prevent (or require) any specific runtime options.
In the particular case of trying to hide the contents of a Docker image, anyone who can run any docker command at all can trivially get unlimited root access on the host, and can docker run any command; even without being able to use the -t option they can docker run ... tar cvf - /app to copy the source out, or they can probably find the source code in the /var/lib/docker tree with some poking around.
The only way to prevent interactive shell access to a container is to not have a shell in the image at all (and even then, you could bind-mount a Busybox binary into the container and run that). The only way to prevent source code from being copied out of an image is for it to not be there at all. This means you have to use a compiled language (Go, C++, Java, Rust) if this is a concern for you.
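For illustration, this is how trivially code can be pulled out once someone can talk to the Docker daemon (the container and image names are placeholders; /app follows the example above):
docker cp <container_id>:/app ./app-copy     # copy a directory straight out of a running container
docker create --name dump <image_name>       # or materialize a container from the image...
docker export -o rootfs.tar dump             # ...and dump its entire filesystem to a tarball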

How to Run a Command in a Container Using Local Input Files without Copying

I am new to Docker and containers. I have a container containing MRI analysis software. Within this container are many other programs that the main software draws its commands from. I would like to run a single command from one of the programs in this container, using research data that is located on an external hard drive plugged into the local machine running Docker.
I know there is a cp command for copying files (such as scripts) into containers and most other questions along these lines seem to recommend copying the files from your local machine into the container and then running the script (or whatever) from the container. In my case I need the container to access data from separate folders in a directory structure and copying over the entire directory is not feasible since it is quite large.
I honestly just want to know how I can run a single command inside the container using inputs present on my local machine. I have run docker ps to get the CONTAINER ID, which is d8dbcf705ee7. Having looked into executing commands inside containers, I tried the following command:
docker exec d8dbcf705ee7 /bin/bash -c "mcflirt -in /Volumes/DISS/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold.nii -out sub-S06V1A_task-compound_run-01_bold_mcf_COMMAND_TEST.nii.gz -reffile /Volumes/DISS/FMRIPREP_TMP/sub-S06V1A_dof6_ver1.2.5/fmriprep_wf/single_subject_S06V1A_wf/func_preproc_task_compound_run_01_wf/bold_reference_wf/gen_ref/ref_image.nii.gz -mats -plots"
mcflirt is the command I want to run inside the container. I believe the exec command should do what I hope, since if I run docker exec d8dbcf705ee7 /bin/bash -c "mcflirt" I get the help output for the mcflirt command, which is the expected outcome in that case. The files under the /Volumes/... paths are the files on my local machine that I would like to access. I understand that the location of the files is the problem, since I cannot tab-complete those paths within this command; when I run it I get the following output:
Image Exception : #22 :: ERROR: Could not open image /Volumes/DISS/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold
terminate called after throwing an instance of 'RBD_COMMON::BaseException'
Can anyone point me in the right direction?
So if I understood you correctly, you need to execute a shell script and provide it with some context (like local files).
The way to do this is straightforward.
Let's say your script and all the needed files are located in the /hello folder on your host PC (it doesn't really matter whether they are stored together or not; this just shows the technique).
/hello
- runme.sh
- datafile1
- datafile2
You mount this folder into your container to make the files accessible inside. If you don't need the container to modify them, it's better to mount it in read-only mode.
You launch docker like this:
docker run -it -v /hello:/hello2:ro ubuntu /hello2/runme.sh
And that's it! Your script runme.sh gets executed inside the container and has access to the files next to it, thanks to the -v /hello:/hello2:ro directive. It maps the host's folder /hello onto the container's folder /hello2 in read-only (ro) mode.
Note that you could use the same name on both sides; I've made them different just to show which is which.
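Applied to the question, that could look something like the sketch below. The image name is a placeholder, the /data mount point is arbitrary, and it assumes mcflirt is directly runnable in the image, as the question suggests; also note that bind mounts can only be added when a container is created with docker run, not to an already running container via docker exec. The mount is not read-only here so the -out result can be written back to the drive:
docker run --rm -v /Volumes/DISS:/data <image_name> \
    mcflirt -in /data/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold.nii \
            -out /data/sub-S06V1A_task-compound_run-01_bold_mcf_COMMAND_TEST.nii.gz \
            -reffile /data/FMRIPREP_TMP/sub-S06V1A_dof6_ver1.2.5/fmriprep_wf/single_subject_S06V1A_wf/func_preproc_task_compound_run_01_wf/bold_reference_wf/gen_ref/ref_image.nii.gz \
            -mats -plots
Inside the container the paths start with /data instead of /Volumes/DISS, which is what mcflirt actually sees.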

docker ubuntu sourcing after starting image

I built myself an image for ROS. I run it while mounting my original home directory from the host, plus some tricks to get graphics working as well. After starting a shell inside Docker I always need to execute two source commands. One of the files to be sourced is actually inside the container, but the other resides in my home directory, which only gets mounted when the container starts. I would like to have these two files sourced automatically.
I tried adding
RUN bash -c "source /opt/ros/indigo/setup.bash"
to the Dockerfile, but this did not actually source it. Using CMD instead of RUN didn't drop me into the container's shell (I assume it finished executing source and then exited?). I don't even have an idea how to source the file that is only available after startup. What would I need to do?
TL;DR: you need to perform this step as part of your CMD or ENTRYPOINT, and for something like a source command, you need a step after that in the shell to run your app, or whatever shell you'd like. If you just want a bash shell as your command, then put your source command inside something like your .bashrc file. Or you can run something like:
bash -c "source /opt/ros/indigo/setup.bash && bash"
as your command.
One of the files to be sourced is actually inside the container, but the other resides in my home directory, which only gets mounted when the container starts.
...
I tried adding ... to the image file
Images are built using temporary containers that only see your Dockerfile instructions and the context sent with that to run the build. Containers use that built image and all of your configuration, like volumes, to run your application. There's a hard divider between those two steps, image build and container run, and your volumes are not available during that image build step.
Each of those RUN steps being performed for the image build are done in a temporary container that only stores the output of the filesystem when it's finished. Changes to your environment, a cd into another directory, spawned processes or services in the background, or anything else not written to the filesystem when the command spawned by RUN exits, will be lost. This is one reason you will see commands chained together in a single long RUN command, and it's why you have ENV and WORKDIR commands in the Dockerfile.
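One common pattern for the file that only exists after the home volume is mounted is a small entrypoint script that runs at container start rather than at build time. A sketch, assuming the second setup file lives in a catkin workspace in your mounted home (adjust the path); wire it in with ENTRYPOINT ["/entrypoint.sh"] in your Dockerfile:
#!/bin/bash
# entrypoint.sh - runs when the container starts, after the volumes are mounted
source /opt/ros/indigo/setup.bash           # file baked into the image
source "$HOME/catkin_ws/devel/setup.bash"   # file from the mounted home (path is a guess)
exec "$@"                                   # hand off to the requested command, e.g. bash
Because the sourcing happens in the same shell that then execs your command, the environment actually carries over, unlike a RUN step at build time.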

Can I run a bash script from a file in a separate docker volume before a container starts?

I have a situation where I need to source a bash environment file that lives in a separate volume (pulled in via volumes_from in docker-compose) when a container starts, so that all future commands run on the container execute under that bash environment (it runs some scripts and sets a lot of dynamic variables pulled in from other places). The reason I'm using a volume instead of just adding this command directly to the image is that the environment file I need to include is outside the Dockerfile context, and Dockerfiles don't support that.
I tried adding a source /path/to/volume/envfile line into the root user's .bashrc file in the hope that it would be run when the container started, but that didn't work. I'm assuming that's because the volumes aren't actually mounted until after the container / shell has started and the .bashrc commands have already run (which makes sense).
Does anyone have any idea on how I can accomplish something like this? I'm open to alternative methods, however the one thing I can't change here is moving the file I need inside of the Docker context, as that would break quite a number of other things.
My (slightly edited) Dockerfile and docker-compose.yml files: https://gist.github.com/joeellis/235d90799eb647ab00ec
EDIT: And as a test, I'm trying to run a rake db:create:all on the container, like docker-compose run app rake db:create:all which is returning an error that the environment file I need cannot be found / loaded. Interestingly enough, if I shell into the container and run the command, it all seems to work just great. So maybe when a container is given a command via run, it doesn't necessarily open up a shell, but uses something else?
The problem is that the shell in which your /src/app/bin/start-app is run is not an interactive shell => .bashrc is not read!
You can fix this by doing this:
Add the source line to /root/.profile instead:
RUN echo "source /src/puppet/path/to/file/env/bootstrap" >> /root/.profile
And also run your command as a login shell (sh is bash for you anyhow, via the hack in the Dockerfile :) ):
command: "sh -lc '/src/app/bin/start-app'"
as the command. This should work fine :)
The problem really is just that the file never gets sourced, because you're running in a non-interactive, non-login shell when running via the Docker command instruction.
It works when you shell into the container because, bam, you get an interactive shell that sources that file :)
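For the rake example from the edit, the same fix applies to one-off commands; a sketch, assuming the service is named app as in the question:
# run the one-off task through a login shell so /root/.profile (and thus the env file) is sourced
docker-compose run app sh -lc 'rake db:create:all'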

Is it possible to use a "blank" docker container without any install on it?

I'm new to Docker, and I think I have understood that Docker is a software virtualization tool (as opposed to OS virtualization). I understand, from this image, that Docker provides a very blank environment with a given file structure and executes on the host's kernel. What we need to do is put our application and its dependencies (with no OS) in it to have a very light, portable container for our app.
But it seems there is a dark side to Docker: each Dockerfile begins with a FROM <image> instruction.
I saw this and this, but I'm not sure I understand. It sounds like Docker is close to a kind of simplified OS virtualizer.
I was interested in the advantage of small image sizes, but if we have to install an OS in each image, my "portable" application will quickly become quite heavy.
Is there really no way to use a "blank" image?
You can start with FROM scratch which is an empty filesystem.
Please see the section on Creating a Base Image if you'd like to spin up your own minimal root file system.
You might be surprised how many dependencies your application actually has on the root file system, and in the end, it is usually more efficient to use one of the standard root file systems in your FROM statement, as Charles Duffy commented above.
empty/Dockerfile
FROM scratch
WORKDIR /
Build it and check the size:
docker build empty/ -t empty
docker images | grep empty
This may be a bit too late, but I just had a use case where I needed to create a bare-bones container that I could launch as part of a multi-container docker-compose setup and get into afterwards via /bin/bash. Keep in mind that a Docker container must run a process, and the container only exists for as long as that process is running. So I created this container with just Python in it and copied in a two-line Python script that just sleeps. Here's what I did.
1. Create the python script wait_service.py with the following code:
import time
time.sleep(1000)
2. Create the Dockerfile with just the following lines:
FROM python:2.7
RUN mkdir -p /test
WORKDIR /test
COPY wait_service.py /test/
CMD python wait_service.py
3. Build and run the container. Using the container id, I could then get inside it. Please adjust the sleep time based on how long you want to keep this container.
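The build/run/exec steps from point 3 could look like this (the image tag wait-service is made up for the example):
docker build -t wait-service .
docker run -d wait-service
docker ps                           # note the container id
docker exec -it <container_id> /bin/bash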
Your application has to have some underlying OS; without one, there is no way for it to start.
I think the most basic one in the docker index is busybox, so a FROM busybox will give you a very minimal setup.
Docker also caches each of its layers heavily, so every image that uses FROM centos:centos7 at the top will share one single copy of the minimal centos7 base image.
The base images are very minimalistic, so there is nothing to worry about.
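For a sense of scale, you can pull busybox and compare its size with a fuller base image (exact sizes vary by tag):
docker pull busybox
docker pull centos:centos7
docker images | grep -E 'busybox|centos'    # busybox is only a few MB; centos is a few hundred
docker run --rm -it busybox sh              # and it still gives you a working shell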
