See image generated in docker

I created a Dockerfile like this:
FROM rikorose/gcc-cmake
RUN git clone https://github.com/hect1995/UBIMET_Challenge.git
WORKDIR /UBIMET_Challenge
RUN mkdir build
WORKDIR build
#RUN apt-get update && apt-get -y install cmake=3.13.1-1ubuntu3 protobuf-compiler
RUN cmake ..
RUN make
Afterwards I do:
docker build --tag trial .
docker run -t -i trial /bin/bash
Then I run an executable that saves a .png file inside the container.
How can I visualize the image?

You can execute commands inside the running container.
To see all containers, run docker ps --all.
To execute something inside the container, run docker exec <container id> <command>.
Alternatively, you can copy files from the container to the host with docker cp <container id>:/file-path ~/target/file-path.
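For example, assuming the executable writes its output to /UBIMET_Challenge/build/output.png inside the container (the file name and path are only placeholders, not from the question), you could copy it out like this:
docker ps                                                        # find the container id of the running "trial" container
docker cp <container id>:/UBIMET_Challenge/build/output.png .   # copy the .png to the current host directory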

Mount a host directory onto the container directory where you are saving your images.
All images saved in that container directory will then be available in the mounted host directory, from where you can view them or copy them to another machine.
For example:
docker run --rm -d -v host_volume_or_directory:container_volume_directory trial
docker exec -it container_name /bin/bash
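As a concrete sketch (the host path, the /output mount point and the output flag are assumptions, not from the original post):
# mount a host directory at /output inside the container
docker run --rm -it -v "$HOME/ubimet_output:/output" trial /bin/bash
# inside the container, point the executable's output at /output, e.g.
# ./your_executable --out /output/result.png    (flag name is hypothetical)
# the .png then appears in ~/ubimet_output on the host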

Related

In Docker, while binding a host directory with a container directory, I am facing a problem

I am trying to bind-mount a directory from a Docker container to my host directory /home. The container directory I am trying to sync is named /test and it contains a file called new.txt.
My Dockerfile is in the /home/sampledocker1 directory. Its contents are as follows:
FROM ubuntu:18.04
RUN ["/bin/bash", "-c", "mkdir test"]
COPY new.txt test
Here, the local file new.txt is available in the current path.
I then built the Docker image and started the container as follows:
docker build -t sample1:latest . # image is created properly
docker run -t -d -v /home:/test sample1:latest /bin/bash
After creating the container with the mount option, I expected the file new.txt from the container's test folder to appear in my /home directory, but it did not.
The bind mount is not happening properly.
With the -v option you actually override the directory that already exists in the image (created by your Dockerfile).
If you run:
docker run -ti sample1:latest /bin/bash
You will find the /test/new.txt file because it is added to the image layer by the COPY command in the Dockerfile.
If you run:
docker run -ti -v /home:/test sample1:latest /bin/bash
You will find the contents of your computer's /home directory in /test of the container, because -v (the mounted volume) overrides the original image layer created with the COPY command in the Dockerfile.
THE SUGGESTION: Remove both the COPY and mkdir commands from your Dockerfile:
FROM ubuntu:18.04
# Nothing at all
And mount your current directory with your docker run command:
docker run -ti -v $(pwd):/test sample1:latest /bin/bash
Since your Dockerfile is now empty, the equivalent command is just running the ubuntu:18.04 image:
docker run -ti -v $(pwd):/test ubuntu:18.04 /bin/bash
P.S. I changed -d (detached) to -i (interactive) in the examples to make sure that you enter the container as soon as you run the docker run command.
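A quick way to convince yourself the bind mount works in both directions (a sketch; the marker file names are just placeholders):
touch marker.txt                       # on the host, in the directory you are about to mount
docker run -ti -v $(pwd):/test ubuntu:18.04 /bin/bash
ls /test                               # inside the container: marker.txt is visible
touch /test/from_container.txt         # write something back through the mount
exit
ls                                     # back on the host: from_container.txt now exists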

How can I access the /etc of a pulled servicemix image

I need to install a custom bundle in a dockerized servicemix image. To do so, I need to place some files in the /etc directory of the servicemix image.
Could anyone help me do this?
I've tried using the Dockerfile as follows:
But it simply doesn't work. I've looked through the documentation of the image, and the author tells me to use the command docker run --volumes-from servicemix-data -it ubuntu bash and inspect /servicemix, but it's empty.
Dockerfile:
FROM dskow/apache-servicemix
WORKDIR .
COPY ./docs /apache-servicemix/etc
...
Command suggested by the author:
docker run --volumes-from servicemix-data -it ubuntu bash
I was unfamiliar with this approach but, having looked at the source (link), I think this is what you want to do:
Create a container called servicemix-data that will become your volume:
docker run --name servicemix-data -v /servicemix busybox
Confirm this worked:
docker container ls --format="{{.ID}}\t{{.Names}}" --all
42b3bc4dbedf servicemix-data
...
Then you want to copy the files into this container:
docker cp ./docs servicemix-data:/etc
Finally, run servicemix using this container (with your files) as the source for its data:
docker run \
--detach \
--name=servicemix \
--volumes-from=servicemix-data \
dskow/apache-servicemix
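To check what actually ended up in the shared volume, one option (a sketch using a throwaway busybox container, not something shown in the original answer) is:
# list the contents of the /servicemix volume exposed by servicemix-data
docker run --rm --volumes-from servicemix-data busybox ls -la /servicemix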
HTH!
Changes made in the container will be lost unless they are committed back to the image.
You can use this Dockerfile https://hub.docker.com/r/mkroli/servicemix/dockerfile and add your COPY statement just before the ENTRYPOINT.
COPY ./docs /opt/apache-servicemix/etc
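Put together, a minimal Dockerfile along those lines might look like the sketch below (building on the mkroli/servicemix image directly is an assumption; you could also copy the linked Dockerfile and edit it):
FROM mkroli/servicemix
# custom files are added before the ENTRYPOINT inherited from the base image starts servicemix
COPY ./docs /opt/apache-servicemix/etc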

Running multiple commands after docker create

I want to make a script run a series of commands in a Docker container and then copy a file out. If I use docker run to do this, I don't get back the container ID, which I would need for the docker cp. (I could try and hack it out of docker ps, but that seems risky.)
It seems that I should be able to:
1. Create the container with docker create (which returns the container ID).
2. Run the commands.
3. Copy the file out.
But I don't know how to get step 2 to work; docker exec only works on running containers...
If I understood your question correctly, all you need is docker "run, exec & cp" -
For example -
Create a container with a name (--name) using docker run -
$ docker run --name bang -dit alpine
Run a few commands using exec -
$ docker exec -it bang sh -c "ls -l"
Copy a file using docker cp -
$ docker cp bang:/etc/hosts ./
Stop the container using docker stop -
$ docker stop bang
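Put together as the kind of script the question describes, that sequence could look like this sketch (the container name, the command and the file paths are just placeholders):
#!/bin/sh
set -e
docker run --name bang -dit alpine                    # 1. start a long-lived container
docker exec bang sh -c "ls -l / > /tmp/listing.txt"   # 2. run the commands inside it
docker cp bang:/tmp/listing.txt ./listing.txt         # 3. copy the file out
docker stop bang                                      # 4. stop and remove the container
docker rm bang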
All you really need is a Dockerfile; build the image from it and run the container using the newly built image. For more information you can refer to this.
A "standard" content of a dockerfile might be something like below:
#Download base image ubuntu 16.04
FROM ubuntu:16.04
# Update Ubuntu Software repository
RUN apt-get update
# Install nginx, php-fpm and supervisord from ubuntu repository
RUN apt-get install -y nginx php7.0-fpm supervisor && \
rm -rf /var/lib/apt/lists/*
#Define the ENV variable
ENV nginx_vhost /etc/nginx/sites-available/default
ENV php_conf /etc/php/7.0/fpm/php.ini
ENV nginx_conf /etc/nginx/nginx.conf
ENV supervisor_conf /etc/supervisor/supervisord.conf
#Copy supervisor configuration
COPY supervisord.conf ${supervisor_conf}
# Configure Services and Port
COPY start.sh /start.sh
CMD ["./start.sh"]
EXPOSE 80 443
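To try a Dockerfile like that, the usual flow is just build and run; the image name and port mappings below are only an example:
docker build -t my-nginx-php .
docker run -d -p 80:80 -p 443:443 my-nginx-php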

How to write docker file to run a docker run command inside an image

I have a shell script which creates and executes Docker containers using the docker run command. I want to keep this script in a Docker image and run it from there. I know that we cannot run Docker inside a container. Is it possible to create a Dockerfile to achieve this?
Dockerfile:
FROM ubuntu:latest
RUN apt-get update && apt-get install -y vim-gnome curl
RUN curl -L https://raw.githubusercontent.com/xyz/abx/test/testing/testing_docker.sh -o testing_docker.sh
RUN chmod +x testing_docker.sh
CMD ["./testing_docker.sh"]
testing_docker.sh:
docker run -it docker info (sample command)
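Note that for the docker commands in testing_docker.sh to reach a Docker daemon at all, the image also needs the Docker CLI installed and the host's Docker socket mounted at run time; this is a common approach rather than something stated above, and the docker.io package name is an assumption for Ubuntu:
# extra line for the Dockerfile: RUN apt-get install -y docker.io
docker build -t testing-docker .
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock testing-docker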

How to COPY files of current directory to folder in Dockerfile

I'm trying to create a Dockerfile that copies all the files in the current directory to a specific folder.
Currently I have
COPY . /this/folder
I'm unable to check the results of this command, as my container closes nearly immediately after I run it. Is there a better way to test if the command is working?
You can start a container and check:
$ docker run -ti --rm <DOCKER_IMAGE> sh
$ ls -l /this/folder
If your Docker image has an ENTRYPOINT set, then run the commands below:
$ docker run -ti --rm --entrypoint sh <DOCKER_IMAGE>
$ ls -l /this/folder
If it is only for testing, include the command below in your Dockerfile:
RUN cd /this/folder && ls
This will list the directory contents during docker build.
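If the build output hides the RUN step's listing (newer Docker versions using BuildKit fold it away), forcing plain progress output should make it visible; this relies on a BuildKit-enabled docker build and is not part of the original answer:
docker build --no-cache --progress=plain -t myimage .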
