Docker services in image not launching

I am new to Docker. I've prepared a Docker image and installed an application on it as the root user. The root user's .bashrc contains some lines that should be executed on launch. On the machine where I prepared the image, everything ran correctly. I saved the image to a tar file with docker save and loaded it on another machine with docker load. But when I start a container from the image with docker run, the root .bashrc is not executed. When I source it manually with source .bashrc, it runs, but some of the services fail. Any idea why this is happening? I was under the impression that once you have an image and load it into a container, it should behave the same everywhere.
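For what it's worth, the behaviour can be reproduced with any image: bash only reads the root user's .bashrc for interactive shells, so a plain docker run of a service command never sources it. A minimal illustration, with the image and command names as placeholders:

docker run my-image /usr/bin/my-service   # non-interactive shell: .bashrc is not sourced
docker run -it my-image bash              # interactive bash session: .bashrc is sourced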

Related

How to run docker-compose with docker image?

I've moved my docker-compose container from the development machine to a server using docker save image-name > image-name.tar and cat image-name.tar | docker load. I can see that my image is loaded by running docker images. But when I try to start my server with docker-compose up, it says there is no docker-compose.yml, and indeed there is no .yml file on the server. How should I handle this?
UPDATE
When I copied all my project files to the server (including docker-compose.yml), everything started to work. But is this the normal approach, and why did I need to save/load the image first?
What you achieve with docker save image-name > image-name.tar and cat image-name.tar | docker load is that you pack a Docker image into an archive and then extract that image on another machine. You can check whether this worked correctly with docker run --rm image-name.
An image is just like a blueprint you can use for running containers. This has nothing to do with your docker-compose.yml, which is just a configuration file that has to live somewhere on your machine. You would have to copy this file manually to the remote machine you wish to run your image on, e.g. using scp docker-compose.yml remote_machine:/home/your_user/docker-compose.yml. You could then run docker-compose up from /home/your_user.
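For illustration, a minimal docker-compose.yml on the server that uses the image you just loaded might look something like this (the service name and port mapping are only assumptions):

version: "3"
services:
  app:
    image: image-name
    ports:
      - "8080:8080"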
EDIT: Additional info concerning the updated question:
UPDATE When I copied all my project files to the server (including docker-compose.yml), everything started to work. But is this the normal approach, and why did I need to save/load the image first?
Personally, I have never used this approach of transferring a Docker image (but it's cool, I didn't know about it). What you would typically do is push your image to a Docker registry (either the official Docker Hub, or a self-hosted registry) and then pull it from there.
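As a rough sketch, that registry workflow usually looks something like this (the registry address, user name, and tag are placeholders):

# on the development machine
docker tag image-name registry.example.com/your-user/image-name:latest
docker push registry.example.com/your-user/image-name:latest
# on the server
docker pull registry.example.com/your-user/image-name:latest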

How to Run a Command in a Container Using Local Input Files without Copying

I am new to Docker and containers. I have a container consisting of MRI analysis software. Within this container are many other software packages that the main software draws its commands from. I would like to run a single command from one of the packages in this container, using research data located on an external hard drive that is plugged into the local machine running Docker.
I know there is a docker cp command for copying files (such as scripts) into containers, and most other questions along these lines seem to recommend copying the files from your local machine into the container and then running the script (or whatever) from there. In my case, I need the container to access data from separate folders in a directory structure, and copying over the entire directory is not feasible since it is quite large.
I honestly just want to know how I can run a single command inside the container using inputs present on my local machine. I have run docker ps to get the CONTAINER ID, which is d8dbcf705ee7. Having looked into executing commands inside containers, I tried the following command:
docker exec d8dbcf705ee7 /bin/bash -c "mcflirt -in /Volumes/DISS/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold.nii -out sub-S06V1A_task-compound_run-01_bold_mcf_COMMAND_TEST.nii.gz -reffile /Volumes/DISS/FMRIPREP_TMP/sub-S06V1A_dof6_ver1.2.5/fmriprep_wf/single_subject_S06V1A_wf/func_preproc_task_compound_run_01_wf/bold_reference_wf/gen_ref/ref_image.nii.gz -mats -plots"
mcflirt is the command I want to run inside the container. I believe docker exec should do what I want, since running docker exec d8dbcf705ee7 /bin/bash -c "mcflirt" prints the help output for the mcflirt command, which is the expected outcome in that case. The files under the /Volumes/... paths are the files on my local machine that I would like to access. I suspect the location of the files is the problem, since I cannot tab-complete the paths within this command; when I run it I get the following output:
Image Exception : #22 :: ERROR: Could not open image /Volumes/DISS/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold
terminate called after throwing an instance of 'RBD_COMMON::BaseException'
Can anyone point me in the right direction?
So if I understand you correctly, you need to execute a shell script and provide it with some context (like local files).
The approach is straightforward.
Let's say your script and all the files it needs are located in the /hello folder of your host PC (it doesn't really matter whether they are stored together or not; this is just to show the technique).
/hello
- runme.sh
- datafile1
- datafile2
You mount this folder into your container to make the files accessible inside it. If you don't need the container to modify them, it is better to mount the folder in read-only mode.
You launch docker like this:
docker run -it -v /hello:/hello2:ro ubuntu /hello2/runme.sh
And that's it! Your script runme.sh gets executed inside the container and has access to the nearby files, thanks to the -v /hello:/hello2:ro directive, which maps the host folder /hello to the container folder /hello2 in read-only (ro) mode.
Note that you can use the same name on both sides; I've used different names here just to make the mapping clearer.
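Applied to the original question, that would mean starting the container with the external drive mounted, and then pointing mcflirt at the container-side paths. A sketch, where the image name is a placeholder and the mcflirt arguments are abbreviated:

docker run -it -v /Volumes/DISS:/data:ro my-mri-image /bin/bash
# inside the container the same files are now visible under /data
mcflirt -in /data/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold.nii -mats -plots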

Docker :: How to see the packaging inside an image made from Docker for Mac?

I created an image wildfly using Docker Desktop for Mac. Then I tried to run it; however, there seems to be a deployment error when the image is run.
In order to check whether the Dockerfile packaged the image properly, I would like to debug it by inspecting its contents.
Is there a way to see the contents of an image made with Docker Desktop for Mac?
docker history shows the commands used to build the image. Otherwise, you could docker run the image, using sh as the command, to get a terminal inside it; then you can browse the file system as normal.
Rather than looking at the image directly, it would probably be easier to start a container from the image with a shell and inspect it from there.
docker run -it wildfly /bin/sh
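For example, assuming the image is tagged wildfly and follows the standard WildFly directory layout, the inspection could look like this:

docker history wildfly
docker run -it --rm wildfly /bin/sh
# inside the container, check what actually got packaged
ls /opt/jboss/wildfly/standalone/deployments/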

Docker Parameterize Files Passed Inside

I am trying to pass a directory into the container, in a way that can eventually be automated. However, I don't see any alternative other than physically editing the Dockerfile and manually typing the specific directory to be added.
Note: I have tried mounted volumes; however, that solution doesn't solve my issue, as I eventually want to invoke the container on a directory and have a script run against that directory inside the container, not simply copy the local directory into the container.
Method 1:
$ docker build --build-arg project_directory=/path/to/dir .
ARG project_directory
ADD $project_directory .
My unsuccessful attempt assumes that I can use the argument's value as a plain string that the ADD command will interpret just as if I had manually typed the path.
not simply copying the local directory inside the container
That's exactly what you're doing now, by using ADD $project_directory. If you need to make changes from the container and have them reflected onto the host, use:
docker run -v $host_dir:$container_dir image:tag
The command above launches a new container, and it's quite possible for you to launch it with different directory names. You can do so in a loop, from a Jenkins pipeline, a shell script, or whatever suits your development environment.
#!/bin/bash
container_dir=/workspace
for directory in /src /realsrc /kickasssrc
do
    docker run -v "$directory":"$container_dir" image:tag
done
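If a script should then run against the mounted directory inside the container, it can simply be appended to the command; for example (process.sh is a hypothetical script already present in the image):

docker run -v "$directory":"$container_dir" image:tag /usr/local/bin/process.sh "$container_dir"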

docker ubuntu sourcing after starting image

I built myself an image for ROS. I run it while mounting my home directory from the host, plus a few tricks to get graphics working as well. After starting a shell inside Docker, I always need to execute two source commands. One of the files to be sourced is actually inside the container, but the other resides in my home directory, which only gets mounted when the container starts. I would like to have these two files sourced automatically.
I tried adding
RUN bash -c "source /opt/ros/indigo/setup.bash"
to the image file, but this did not actually source it. Using CMD instead of RUN didn't drop me into the container's shell (I assume it finished executing source and then exited?). I don't even have an idea how to source the file that is only available after startup. What would I need to do?
TL;DR: you need to perform this step as part of your CMD or ENTRYPOINT, and for something like a source command, you need a step after it in the same shell to run your app or whatever shell you'd like. If you just want a bash shell as your command, then put your source command inside something like your .bashrc file. Or you can run something like:
bash -c "source /opt/ros/indigo/setup.bash && bash"
as your command.
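One way to handle the file that only exists once the volume is mounted is a small entrypoint script. A rough sketch, assuming your home is mounted at /home/you and the second setup file lives there (adjust both paths to your setup):

#!/bin/bash
# entrypoint.sh - source the file baked into the image, then the one from the mounted home
source /opt/ros/indigo/setup.bash
source /home/you/catkin_ws/devel/setup.bash   # path is an assumption
exec "$@"

and in the Dockerfile:

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["bash"]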
One of the files to be sourced is actually inside the container, but the other resides in my home, which only gets mounted on starting the container.
...
I tried adding ... to the image file
Images are built using temporary containers that only see your Dockerfile instructions and the build context sent along with them. Containers use that built image, plus all of your runtime configuration like volumes, to run your application. There's a hard divide between those two steps, image build and container run, and your volumes are not available during the image build step.
Each of those RUN steps performed during the image build is done in a temporary container that only stores the resulting filesystem output when it is finished. Changes to your environment, a cd into another directory, processes or services spawned in the background, or anything else not written to the filesystem when the command spawned by RUN exits, will be lost. This is one reason you will see commands chained together in a single long RUN command, and it's why the Dockerfile has ENV and WORKDIR instructions.
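A quick way to see this behaviour is to compare a RUN-time environment change with ENV in a Dockerfile; a minimal sketch:

FROM ubuntu
# the export only exists while this RUN step's temporary container is alive
RUN export MY_VAR=hello
# prints an empty value: the export above was lost when the previous step finished
RUN echo "MY_VAR is: $MY_VAR"
# ENV is written into the image metadata, so it persists across steps and into containers
ENV MY_VAR=hello
RUN echo "MY_VAR is: $MY_VAR"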
