Copy files from container to local using shell script - docker

I'm trying to write a shell script that builds and runs containers, and then copies files from the Docker container to the host.
docker build . -t container:latest
docker run -t -d container /bin/bash
docker cp container_id:/xyz/xyz.txt /tmp
How can I capture the container ID and then use it later within the shell script? Thanks for your help.

The first option is to simply store the container ID in a variable; docker run -d prints the new container's ID on stdout.
docker build . -t container:latest
container_id="$(docker run -t -d container /bin/bash)"
docker cp "$container_id":/xyz/xyz.txt /tmp
Docker also allows you to specify a container name.
docker build . -t container:latest
docker run -t --name NAME -d container /bin/bash
docker cp NAME:/xyz/xyz.txt /tmp
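Putting the pieces together, here is a minimal sketch of the whole script using the question's own image and path names (set -e and the final cleanup line are additions):
#!/bin/sh
set -e
# Build the image
docker build . -t container:latest
# Start a detached container; docker run -d prints the new container's ID
container_id="$(docker run -t -d container:latest /bin/bash)"
# Copy the file from the running container to the host
docker cp "$container_id":/xyz/xyz.txt /tmp
# Clean up: stop and remove the container
docker rm -f "$container_id"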

Related

In Docker, while binding a host directory to a container directory, I am facing a problem

I am trying to bind-mount a directory from a Docker container to my host directory called /home. The container directory I am trying to sync is named /test and it contains a file called new.txt.
My Dockerfile is in /home/sampledocker1 directory. Its contents are as follows:
FROM ubuntu:18.04
RUN ["/bin/bash", "-c", "mkdir test"]
COPY new.txt test
Here, the local file new.txt is available in the current path.
I executed the commands below: first I built the Docker image, then I started the container:
docker build -t sample1:latest . # image is created properly
docker run -t -d -v /home:/test sample1:latest /bin/bash
After creating the container with the mount option, I expected the file new.txt from the container's test folder to appear in my /home directory, but it did not.
The bind mount is not happening properly here.
By running with the -v option you actually override the directory that already exists in the image (the one created by your Dockerfile).
If you run:
docker run -ti sample1:latest /bin/bash
You will find the /test/new.txt file, because it was added to the image layer by the COPY command in the Dockerfile.
If you run:
docker run -ti -v /home:/test sample1:latest /bin/bash
You will find the contents of your computer's /home directory in /test inside the container, because -v (a bind-mounted volume) overrides the original image layer created by the COPY command in the Dockerfile.
THE SUGGESTION: remove both the COPY and mkdir commands from your Dockerfile:
FROM ubuntu:18.04
# Nothing at all
And mount your current directory with your docker run command:
docker run -ti -v "$(pwd)":/test sample1:latest /bin/bash
Since your Dockerfile is now effectively empty, an equivalent command is just running the ubuntu:18.04 image directly:
docker run -ti -v "$(pwd)":/test ubuntu:18.04 /bin/bash
P.S. I changed -d (detached) to -i (interactive) in the examples to make sure you get a shell inside the container as soon as you run docker run.
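To convince yourself of the mount direction, here is a quick sketch (assuming the ubuntu:18.04 image and the current directory, as above):
# A file created on the host in the mounted directory...
touch "$(pwd)/from_host.txt"
# ...is visible inside the container under /test
docker run --rm -v "$(pwd)":/test ubuntu:18.04 ls /test
# And a file created inside the container under /test appears on the host
docker run --rm -v "$(pwd)":/test ubuntu:18.04 touch /test/from_container.txt
ls from_container.txt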

While starting a Docker container I have to execute a script inside the container

While starting a Docker container I have to execute a script inside the container. Can I do it using the docker run or docker start command, by mentioning the script's path inside Docker? I know I could use CMD in a Dockerfile, but no Dockerfile is present.
Have you tried
docker run -it <image-name> bash -c "command-to-execute"
(Note the -c flag: without it, bash would treat the argument as the path of a script file rather than a command string.)
To enter a running Docker container (get a Bash prompt inside the container), please run the following:
docker container exec -it <container_id> /bin/bash
You can get the container_id by listing running Docker containers with:
docker container ps -a or docker ps -a
docker run --name TEST -d image sh -c "CMD"
In the CMD part you can give the path of the shell script.
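For example, if the script lives only on the host, one option (script.sh and image are placeholder names here) is to bind-mount it and run it as the container's command:
# Mount script.sh from the host and execute it when the container starts
docker run --rm -v "$(pwd)/script.sh":/script.sh image sh /script.sh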

Where is the file I mounted at runtime in Docker?

I mounted my secret file secret.json at runtime into a local Docker container, and while it works, I can't seem to find this volume anywhere.
My Dockerfile looks like this and has no reference to the secret:
RUN mkdir ./app
ADD src/python ./app/src/python
ENTRYPOINT ["python"]
Then I ran
docker build -t {MY_IMAGE_NAME} .
docker run -t -v $PATH_TO_SECRET_FILE/:/secrets/secret.json \
-e MY_CREDENTIALS=/secrets/secret.json \
{MY_IMAGE_NAME} ./app/src/python/runner.py
This runs successfully locally, but when I do
docker run --entrypoint "ls" {MY_IMAGE_NAME}
I don't see the secrets directory.
Also, if I run
docker volume ls
it doesn't have anything that looks like secrets.
Without the MY_CREDENTIALS environment variable the script won't run, so I am sure the secret file is mounted somewhere, but I can't figure out where it is. Any idea?
You are actually creating two separate containers with the commands you are running. The first docker run command creates a container from the image you built, with the volume mounted; the second command then creates a new container from the same image, but without any volumes, since you don't define any in that command. (Bind mounts also never show up in docker volume ls, which lists only named volumes.)
I'd suggest you give your container a name, like so:
docker run -t -v $PATH_TO_SECRET_FILE/:/secrets/secret.json \
-e MY_CREDENTIALS=/secrets/secret.json \
--name my_container {MY_IMAGE_NAME} ./app/src/python/runner.py
and then run exec on that container
docker exec -it my_container sh
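You can also confirm where the file landed without entering the container by inspecting its mount table (my_container is the name given above):
# Show all mounts of the container as JSON
docker inspect -f '{{ json .Mounts }}' my_container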

Can you have a docker cp in the same line as docker run?

I am trying to pass a file into a container while calling docker run. Is it possible to also execute a docker cp as part of the same docker run?
I'm not sure if this solves your issue, but you can accomplish this in three steps: create the container first, then copy the file in, and finally start the container.
host> docker create --name test -it ubuntu:latest bash
7a986aedcd886c6f7e7c65dee8617c05af3e63824e44274e52bc0b4036d81e43
host> docker cp SampleFile test:/SampleFile
host> docker start -i test
docker> cat /SampleFile
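The same three steps also work as a non-interactive script; this sketch assumes SampleFile exists in the current directory:
#!/bin/sh
set -e
# Create the container without starting it
docker create --name test -it ubuntu:latest bash
# Copy the file into the created (still stopped) container
docker cp SampleFile test:/SampleFile
# Start the container and attach to it
docker start -i test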

`docker cp` doesn't copy file into container

I have a Dockerized project. I build the image, copy a file from the host system into the Docker container, and then shell into the container to find that the file isn't there. How is docker cp supposed to work?
$ docker build -q -t foo .
Sending build context to Docker daemon 64 kB
Step 0 : FROM ubuntu:14.04
---> 2d24f826cb16
Step 1 : MAINTAINER Brandon Istenes <redacted@email.com>
---> Using cache
---> f53a163ef8ce
Step 2 : RUN apt-get update
---> Using cache
---> 32b06b4131d4
Successfully built 32b06b4131d4
$ docker cp ~/.ssh/known_hosts foo:/root/.ssh/known_hosts
$ docker run -it foo bash
WARNING: Your kernel does not support memory swappiness capabilities, memory swappiness discarded.
root@421fc2866b14:/# ls /root/.ssh
root@421fc2866b14:/#
So there was some mix-up with the names of images and containers. Obviously, the cp operation was acting on a different container than the one I brought up with the run command. In any case, the correct procedure is:
# Build the image, call it foo-build
docker build -q -t foo-build .
# Create a container from the image called foo-tmp
docker create --name foo-tmp foo-build
# Run the copy command on the container
docker cp /src/path foo-tmp:/dest/path
# Commit the container as a new image
docker commit foo-tmp foo
# The new image will have the files
docker run foo ls /dest
You need docker exec to get into your container; your command creates a new container instead.
I have this alias to get into the last created container using that container's own shell:
alias exec_last='docker exec -it $(docker ps -lq) $(docker inspect -f "{{.Path}}" $(docker ps -lq))'
What Docker version are you using? As of Docker 1.8, docker cp supports copying from host to container:
• Copy files from host to container: docker cp used to only copy files from a container out to the host, but it now works the other way round: docker cp foo.txt mycontainer:/foo.txt
Please note the difference between images and containers. If you want every container you create from that Dockerfile to contain the file (even if you don't copy it in afterwards), you can use COPY or ADD in the Dockerfile. If you want to copy the file after the container is created from the image, you can use the docker cp command, which supports this direction as of version 1.8.
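As a minimal illustration of the Dockerfile route (the COPY line is an illustration, not the asker's actual Dockerfile; the file must first be placed in the build context):
FROM ubuntu:14.04
# Bake the file into the image so every container created from it has the file
# (known_hosts must sit in the build context next to the Dockerfile)
COPY known_hosts /root/.ssh/known_hosts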
