Docker: Copying files from Docker container to host
I'm thinking of using Docker to build my dependencies on a Continuous Integration (CI) server, so that I don't have to install all the runtimes and libraries on the agents themselves.
To achieve this I would need to copy the build artifacts that are built inside the container back into the host. Is that possible?
In order to copy a file from a container to the host, you can use the command
docker cp <containerId>:/file/path/within/container /host/path/target
Here's an example:
$ sudo docker cp goofy_roentgen:/out_read.jpg .
Here goofy_roentgen is the container name I got from the following command:
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1b4ad9311e93 bamos/openface "/bin/bash" 33 minutes ago Up 33 minutes 0.0.0.0:8000->8000/tcp, 0.0.0.0:9000->9000/tcp goofy_roentgen
You can also use (part of) the Container ID. The following command is equivalent to the first:
$ sudo docker cp 1b4a:/out_read.jpg .
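If this lookup needs to be scripted, e.g. in a CI job, the container ID can be found by image name with a docker ps filter instead of being copied by hand. A minimal sketch; copy_from_newest is a made-up helper name, and the image and file path are just the ones from the example above:

```shell
# Sketch: copy a file out of the newest running container created from a
# given image. copy_from_newest is a made-up helper name, not a docker command.
copy_from_newest() {
  image="$1"; src="$2"; dest="$3"
  cid=$(docker ps -q --filter "ancestor=$image" | head -n 1)
  [ -n "$cid" ] || { echo "no running container for $image" >&2; return 1; }
  docker cp "$cid:$src" "$dest"
}
# e.g. copy_from_newest bamos/openface /out_read.jpg .
```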
You do not need to use docker run.
You can do it with docker create.
From the docs:
The docker create command creates a writeable container layer over the specified image and prepares it for running the specified command. The container ID is then printed to STDOUT. This is similar to docker run -d except the container is never started.
So, you can do:
docker create --name dummy IMAGE_NAME
docker cp dummy:/path/to/file /dest/to/file
docker rm -f dummy
Here, you never start the container. That looked beneficial to me.
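The three commands can be wrapped into a small reusable function so cleanup happens even when the copy fails. A sketch; copy_from_image is a made-up name, not a docker command:

```shell
# Sketch of the create/cp/rm dance as one reusable function
# (copy_from_image is a made-up name). The container is never started.
copy_from_image() {
  image="$1"; src="$2"; dest="$3"
  cid=$(docker create "$image") || return 1
  docker cp "$cid:$src" "$dest"
  rc=$?
  docker rm -f "$cid" >/dev/null   # clean up even if the copy failed
  return $rc
}
# e.g. copy_from_image IMAGE_NAME /path/to/file /dest/to/file
```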
Mount a "volume" and copy the artifacts into there:
mkdir artifacts
docker run -i -v ${PWD}/artifacts:/artifacts ubuntu:14.04 sh << COMMANDS
# ... build software here ...
cp <artifact> /artifacts
# ... copy more artifacts into `/artifacts` ...
COMMANDS
Then when the build finishes and the container is no longer running, it has already copied the artifacts from the build into the artifacts directory on the host.
Edit
Caveat: When you do this, you may run into problems when the user id used inside the container does not match the user id of the current user. That is, the files in /artifacts will show up on the host as owned by the UID of the user used inside the docker container. A way around this is to run the container with the calling user's UID:
docker run -i -v ${PWD}:/working_dir -w /working_dir -u $(id -u) \
ubuntu:14.04 sh << COMMANDS
# Since $(id -u) owns /working_dir, you should be okay running commands here
# and having them work. Then copy stuff into /working_dir/artifacts .
COMMANDS
docker cp containerId:source_path destination_path
containerId can be obtained from the command docker ps -a.
The source path should be absolute. For example, if your application/service directory starts at /app inside the docker container, the path would be /app/some_directory/file.
Example: docker cp d86844abc129:/app/server/output/server-test.png C:/Users/someone/Desktop/output
TLDR;
$ docker run --rm -iv${PWD}:/host-volume my-image sh -s <<EOF
chown $(id -u):$(id -g) my-artifact.tar.xz
cp -a my-artifact.tar.xz /host-volume
EOF
Description
docker run with a host volume, chown the artifact, cp the artifact to the host volume:
$ docker build -t my-image - <<EOF
> FROM busybox
> WORKDIR /workdir
> RUN touch foo.txt bar.txt qux.txt
> EOF
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM busybox
---> 00f017a8c2a6
Step 2/3 : WORKDIR /workdir
---> Using cache
---> 36151d97f2c9
Step 3/3 : RUN touch foo.txt bar.txt qux.txt
---> Running in a657ed4f5cab
---> 4dd197569e44
Removing intermediate container a657ed4f5cab
Successfully built 4dd197569e44
$ docker run --rm -iv${PWD}:/host-volume my-image sh -s <<EOF
chown -v $(id -u):$(id -g) *.txt
cp -va *.txt /host-volume
EOF
changed ownership of '/host-volume/bar.txt' to 10335:11111
changed ownership of '/host-volume/qux.txt' to 10335:11111
changed ownership of '/host-volume/foo.txt' to 10335:11111
'bar.txt' -> '/host-volume/bar.txt'
'foo.txt' -> '/host-volume/foo.txt'
'qux.txt' -> '/host-volume/qux.txt'
$ ls -n
total 0
-rw-r--r-- 1 10335 11111 0 May 7 18:22 bar.txt
-rw-r--r-- 1 10335 11111 0 May 7 18:22 foo.txt
-rw-r--r-- 1 10335 11111 0 May 7 18:22 qux.txt
This trick works because the chown invocation within the heredoc takes the $(id -u):$(id -g) values from outside the running container; i.e., the docker host.
The benefits are:
you don't have to docker container run --name or docker container create --name before
you don't have to docker container rm after
From Docker container to local machine:
$ docker cp containerId:/sourceFilePath/someFile.txt C:/localMachineDestinationFolder
From local machine to Docker container:
$ docker cp C:/localMachineSourceFolder/someFile.txt containerId:/containerDestinationFolder
Mount a volume, copy the artifacts, adjust owner id and group id:
mkdir artifacts
docker run -i --rm -v ${PWD}/artifacts:/mnt/artifacts centos:6 /bin/bash << COMMANDS
ls -la > /mnt/artifacts/ls.txt
echo Changing owner from \$(id -u):\$(id -g) to $(id -u):$(id -g)
chown -R $(id -u):$(id -g) /mnt/artifacts
COMMANDS
EDIT: Note that some of the commands, like \$(id -u), are backslash-escaped and will therefore be processed within the container, while those that are not escaped are processed by the shell running on the host machine BEFORE the commands are sent to the container.
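The escaping rule can be checked without Docker at all: with an unquoted heredoc delimiter, the host shell expands plain $(...) before the text is handed over, while backslash-escaped \$(...) arrives literally and is expanded by the inner shell:

```shell
# Pure-shell demonstration of the heredoc expansion rules (no docker needed).
outer="host-value"
result=$(sh <<EOF
echo "unescaped: $outer"
echo "escaped: \$(echo evaluated-inside)"
EOF
)
echo "$result"
```

The first line is expanded by the host shell before the inner sh ever runs; the second survives verbatim and is evaluated inside.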
Note that if the file is produced when the container runs (rather than baked into the image at build time), the container must actually run before docker cp can copy it:
docker build -t IMAGE_TAG .
docker run -d IMAGE_TAG
CONTAINER_ID=$(docker ps -alq)
# If you do not know the exact file name, you'll need to run "ls"
# FILE=$(docker exec $CONTAINER_ID sh -c "ls /path/*.zip")
docker cp $CONTAINER_ID:/path/to/file .
docker stop $CONTAINER_ID
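The commented-out ls idea can be fleshed out into a small helper that discovers the artifact's name first and then copies it. A sketch; find_and_copy is a made-up name and the glob is only an example:

```shell
# Sketch: discover an artifact's name inside the container, then copy it out.
# find_and_copy is a made-up helper name.
find_and_copy() {
  cid="$1"; pattern="$2"; dest="$3"
  file=$(docker exec "$cid" sh -c "ls $pattern" | head -n 1)
  [ -n "$file" ] || { echo "nothing matches $pattern" >&2; return 1; }
  docker cp "$cid:$file" "$dest"
}
# e.g. find_and_copy "$CONTAINER_ID" '/path/*.zip' .
```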
If you don't have a running container, just an image, and assuming you want to copy just a text file, you could do something like this:
docker run the-image cat path/to/container/file.txt > path/to/host/file.txt
With the release of Docker 19.03, you can skip creating the container and even building an image. There's an option with BuildKit based builds to change the output destination. You can use this to write the results of the build to your local directory rather than into an image. E.g. here's a build of a go binary:
$ ls
Dockerfile go.mod main.go
$ cat Dockerfile
FROM golang:1.12-alpine as dev
RUN apk add --no-cache git ca-certificates
RUN adduser -D appuser
WORKDIR /src
COPY . /src/
CMD CGO_ENABLED=0 go build -o app . && ./app
FROM dev as build
RUN CGO_ENABLED=0 go build -o app .
USER appuser
CMD [ "./app" ]
FROM scratch as release
COPY --from=build /etc/passwd /etc/group /etc/
COPY --from=build /src/app /app
USER appuser
CMD [ "/app" ]
FROM scratch as artifact
COPY --from=build /src/app /app
FROM release
From the above Dockerfile, I'm building the artifact stage that only includes the files I want to export. And the newly introduced --output flag lets me write those to a local directory instead of an image. This needs to be performed with the BuildKit engine that ships with 19.03:
$ DOCKER_BUILDKIT=1 docker build --target artifact --output type=local,dest=. .
[+] Building 43.5s (12/12) FINISHED
=> [internal] load build definition from Dockerfile 0.7s
=> => transferring dockerfile: 572B 0.0s
=> [internal] load .dockerignore 0.5s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/golang:1.12-alpine 0.9s
=> [dev 1/5] FROM docker.io/library/golang:1.12-alpine@sha256:50deab916cce57a792cd88af3479d127a9ec571692a1a9c22109532c0d0499a0 22.5s
=> => resolve docker.io/library/golang:1.12-alpine@sha256:50deab916cce57a792cd88af3479d127a9ec571692a1a9c22109532c0d0499a0 0.0s
=> => sha256:1ec62c064901392a6722bb47a377c01a381f4482b1ce094b6d28682b6b6279fd 155B / 155B 0.3s
=> => sha256:50deab916cce57a792cd88af3479d127a9ec571692a1a9c22109532c0d0499a0 1.65kB / 1.65kB 0.0s
=> => sha256:2ecd820bec717ec5a8cdc2a1ae04887ed9b46c996f515abc481cac43a12628da 1.36kB / 1.36kB 0.0s
=> => sha256:6a17089e5a3afc489e5b6c118cd46eda66b2d5361f309d8d4b0dcac268a47b13 3.81kB / 3.81kB 0.0s
=> => sha256:89d9c30c1d48bac627e5c6cb0d1ed1eec28e7dbdfbcc04712e4c79c0f83faf17 2.79MB / 2.79MB 0.6s
=> => sha256:8ef94372a977c02d425f12c8cbda5416e372b7a869a6c2b20342c589dba3eae5 301.72kB / 301.72kB 0.4s
=> => sha256:025f14a3d97f92c07a07446e7ea8933b86068d00da9e252cf3277e9347b6fe69 125.33MB / 125.33MB 13.7s
=> => sha256:7047deb9704134ff71c99791be3f6474bb45bc3971dde9257ef9186d7cb156db 125B / 125B 0.8s
=> => extracting sha256:89d9c30c1d48bac627e5c6cb0d1ed1eec28e7dbdfbcc04712e4c79c0f83faf17 0.2s
=> => extracting sha256:8ef94372a977c02d425f12c8cbda5416e372b7a869a6c2b20342c589dba3eae5 0.1s
=> => extracting sha256:1ec62c064901392a6722bb47a377c01a381f4482b1ce094b6d28682b6b6279fd 0.0s
=> => extracting sha256:025f14a3d97f92c07a07446e7ea8933b86068d00da9e252cf3277e9347b6fe69 5.2s
=> => extracting sha256:7047deb9704134ff71c99791be3f6474bb45bc3971dde9257ef9186d7cb156db 0.0s
=> [internal] load build context 0.3s
=> => transferring context: 2.11kB 0.0s
=> [dev 2/5] RUN apk add --no-cache git ca-certificates 3.8s
=> [dev 3/5] RUN adduser -D appuser 1.7s
=> [dev 4/5] WORKDIR /src 0.5s
=> [dev 5/5] COPY . /src/ 0.4s
=> [build 1/1] RUN CGO_ENABLED=0 go build -o app . 11.6s
=> [artifact 1/1] COPY --from=build /src/app /app 0.5s
=> exporting to client 0.1s
=> => copying files 10.00MB 0.1s
After the build was complete the app binary was exported:
$ ls
Dockerfile app go.mod main.go
$ ./app
Ready to receive requests on port 8080
Docker has other options to the --output flag documented in their upstream BuildKit repo: https://github.com/moby/buildkit#output
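Wrapped up as a sketch (export_stage is a made-up helper name), the whole export boils down to a single command:

```shell
# Sketch: build only the named stage and write its files to a host directory
# instead of an image. export_stage is a made-up helper name; requires the
# BuildKit engine (Docker 19.03+).
export_stage() {
  stage="$1"; dest="$2"
  DOCKER_BUILDKIT=1 docker build --target "$stage" \
    --output "type=local,dest=$dest" .
}
# e.g. export_stage artifact ./out
```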
For anyone trying to do this with a MySQL container and storing the volumes locally on your machine: I used the syntax from the top-rated answer to this question, but had to use the path under which MySQL stores its data:
docker cp containerIdHere:/var/lib/mysql pathToYourLocalMachineHere
I am posting this for anyone that is using Docker for Mac.
This is what worked for me:
$ mkdir mybackup # local directory on Mac
$ docker run --rm --volumes-from <containerid> \
-v `pwd`/mybackup:/backup \
busybox \
cp /data/mydata.txt /backup
Note that when I mount using -v that backup directory is automatically created.
I hope this is useful to someone someday. :)
docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH to copy from the container to the host machine.
e.g. docker cp test:/opt/file1 /etc/
For Vice-Versa:
docker cp [OPTIONS] SRC_PATH CONTAINER:DEST_PATH to copy from host machine to container.
Another good option is to first build the image and then run it, using the -c flag of the shell interpreter to execute some commands:
docker run --rm -i -v <host_path>:<container_path> <mydockerimage> /bin/sh -c "cp -r /tmp/homework/* <container_path>"
The above command does this:
-i = run the container in interactive mode
--rm = remove the container after execution
-v = share a folder as a volume from your host path to the container path
Finally, /bin/sh -c lets you pass a command as a parameter, and that command copies your homework files to the container path (which is mounted from the host).
I hope this additional answer may help you
docker run -dit --rm IMAGE
docker cp CONTAINER:SRC_PATH DEST_PATH
https://docs.docker.com/engine/reference/commandline/run/
https://docs.docker.com/engine/reference/commandline/cp/
I used PowerShell (Admin) with this command.
docker cp {container id}:{container path}/error.html C:\\error.html
Example
docker cp ff3a6608467d:/var/www/app/error.html C:\\error.html
sudo docker cp <running_container_id>:<full_file_path_in_container> <path_on_local_machine>
Example:
sudo docker cp d8a17dfc455f:/tests/reports /home/acbcb/Documents/abc
If you just want to pull a file from an image (instead of a running container) you can do this:
docker run --rm <image> cat <source> > <local_dest>
This will bring up the container, write the new file, then remove the container. One drawback, however, is that file permissions and the modified date will not be preserved.
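If you need a whole directory out of an image, or want to keep permissions and modification times, piping tar instead of redirecting cat avoids that drawback. A sketch; copy_dir_from_image is a made-up helper name:

```shell
# Sketch: stream a directory out of an image as a tar archive. Unlike the
# `cat` redirect above, tar preserves permissions and modification times.
# copy_dir_from_image is a made-up helper name.
copy_dir_from_image() {
  image="$1"; src="$2"; dest="$3"
  mkdir -p "$dest"
  docker run --rm "$image" tar -cf - -C "$src" . | tar -xf - -C "$dest"
}
# e.g. copy_dir_from_image my-image /app/build ./build
```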
As a more general solution, there's a CloudBees plugin for Jenkins to build inside a Docker container. You can select an image to use from a Docker registry or define a Dockerfile to build and use.
It'll mount the workspace into the container as a volume (with appropriate user), set it as your working directory, do whatever commands you request (inside the container).
You can also use the docker-workflow plugin (if you prefer code over UI) to do this, with the image.inside() {} command.
Basically all of this, baked into your CI/CD server and then some.
The easiest way is to just create a container, get the ID, and then copy from there
IMAGE_TAG=my-image-tag
container=$(docker create ${IMAGE_TAG})
docker cp ${container}:/src-path ./dst-path/
Create a data directory on the host system (outside the container) and mount this to a directory visible from inside the container. This places the files in a known location on the host system, and makes it easy for tools and applications on the host system to access the files:
docker run -d -v /path/to/Local_host_dir:/path/to/docker_dir docker_image:tag
This can also be done with the SDK, for example in Python. If you already have a container built, you can look up its name via the console (docker ps -a); names are generated as an adjective joined to a scientist's surname (e.g. "relaxed_pasteur").
Check out help(container.get_archive) :
Help on method get_archive in module docker.models.containers:
get_archive(path, chunk_size=2097152) method of docker.models.containers.Container instance
Retrieve a file or folder from the container in the form of a tar
archive.
Args:
path (str): Path to the file or folder to retrieve
chunk_size (int): The number of bytes returned by each iteration
of the generator. If ``None``, data will be streamed as it is
received. Default: 2 MB
Returns:
(tuple): First element is a raw tar data stream. Second element is
a dict containing ``stat`` information on the specified ``path``.
Raises:
:py:class:`docker.errors.APIError`
If the server returns an error.
Example:
>>> f = open('./sh_bin.tar', 'wb')
>>> bits, stat = container.get_archive('/bin/sh')
>>> print(stat)
{'name': 'sh', 'size': 1075464, 'mode': 493,
'mtime': '2018-10-01T15:37:48-07:00', 'linkTarget': ''}
>>> for chunk in bits:
... f.write(chunk)
>>> f.close()
So then something like this will pull out the specified path (/output) from the container to your host machine and unpack the tar.
import docker
import os
import tarfile
# Docker client
client = docker.from_env()
#container object
container = client.containers.get("relaxed_pasteur")
#setup tar to write bits to
f = open(os.path.join(os.getcwd(),"output.tar"),"wb")
#get the bits
bits, stat = container.get_archive('/output')
#write the bits
for chunk in bits:
f.write(chunk)
f.close()
#unpack
tar = tarfile.open("output.tar")
tar.extractall()
tar.close()
docker cp [OPTIONS] SRC_PATH CONTAINER:DEST_PATH
The DEST_PATH must already exist.
If you use podman/buildah [1], it offers greater flexibility for copying files from a container to the host, because it allows you to mount the container.
After you create the container as in this answer
podman create --name dummy IMAGE_NAME
Now we can mount the entire container, and then we use the cp utility found on almost every Linux box to copy the contents of /etc/foobar from the container (dummy) into /tmp on our host machine. All this can be done rootless. Observe:
$ podman unshare -- bash -c '
mnt=$(podman mount dummy)
cp -R ${mnt}/etc/foobar /tmp
podman umount dummy
'
1. podman uses buildah internally, and they also share almost the same api
If you only need a small file, you can use netcat.
Inside the Docker container (started with the port published):
docker run -it -p 4122:4122 <image>
nc -l -p 4122 < Output.txt
On the host machine:
nc 127.0.0.1 4122 > Output.txt
Create a path where you want to copy the file to, and then use:
docker run -d -v <host_path>:<container_path> <image>
You can use a bind mount instead of a volume if you want to mount only one folder rather than create special storage for a container.
Build your image with a tag:
docker build . -t <image>
Run your image, binding the current $(pwd) directory (where app.py is stored) and mapping it to /root/example/ inside your container:
docker run --mount type=bind,source="$(pwd)",target=/root/example/ <image> python app.py
Related
docker compose: create volume and copy files from another folder [duplicate]
I'm thinking of using Docker to build my dependencies on a Continuous Integration (CI) server, so that I don't have to install all the runtimes and libraries on the agents themselves. To achieve this I would need to copy the build artifacts that are built inside the container back into the host. Is that possible?
In order to copy a file from a container to the host, you can use the command docker cp <containerId>:/file/path/within/container /host/path/target Here's an example: $ sudo docker cp goofy_roentgen:/out_read.jpg . Here goofy_roentgen is the container name I got from the following command: $ sudo docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 1b4ad9311e93 bamos/openface "/bin/bash" 33 minutes ago Up 33 minutes 0.0.0.0:8000->8000/tcp, 0.0.0.0:9000->9000/tcp goofy_roentgen You can also use (part of) the Container ID. The following command is equivalent to the first $ sudo docker cp 1b4a:/out_read.jpg .
You do not need to use docker run. You can do it with docker create. From the docs: The docker create command creates a writeable container layer over the specified image and prepares it for running the specified command. The container ID is then printed to STDOUT. This is similar to docker run -d except the container is never started. So, you can do: docker create --name dummy IMAGE_NAME docker cp dummy:/path/to/file /dest/to/file docker rm -f dummy Here, you never start the container. That looked beneficial to me.
Mount a "volume" and copy the artifacts into there: mkdir artifacts docker run -i -v ${PWD}/artifacts:/artifacts ubuntu:14.04 sh << COMMANDS # ... build software here ... cp <artifact> /artifacts # ... copy more artifacts into `/artifacts` ... COMMANDS Then when the build finishes and the container is no longer running, it has already copied the artifacts from the build into the artifacts directory on the host. Edit Caveat: When you do this, you may run into problems with the user id of the docker user matching the user id of the current running user. That is, the files in /artifacts will be shown as owned by the user with the UID of the user used inside the docker container. A way around this may be to use the calling user's UID: docker run -i -v ${PWD}:/working_dir -w /working_dir -u $(id -u) \ ubuntu:14.04 sh << COMMANDS # Since $(id -u) owns /working_dir, you should be okay running commands here # and having them work. Then copy stuff into /working_dir/artifacts . COMMANDS
docker cp containerId:source_path destination_path containerId can be obtained from the command docker ps -a source path should be absolute. for example, if the application/service directory starts from the app in your docker container the path would be /app/some_directory/file example : docker cp d86844abc129:/app/server/output/server-test.png C:/Users/someone/Desktop/output
TLDR; $ docker run --rm -iv${PWD}:/host-volume my-image sh -s <<EOF chown $(id -u):$(id -g) my-artifact.tar.xz cp -a my-artifact.tar.xz /host-volume EOF Description docker run with a host volume, chown the artifact, cp the artifact to the host volume: $ docker build -t my-image - <<EOF > FROM busybox > WORKDIR /workdir > RUN touch foo.txt bar.txt qux.txt > EOF Sending build context to Docker daemon 2.048kB Step 1/3 : FROM busybox ---> 00f017a8c2a6 Step 2/3 : WORKDIR /workdir ---> Using cache ---> 36151d97f2c9 Step 3/3 : RUN touch foo.txt bar.txt qux.txt ---> Running in a657ed4f5cab ---> 4dd197569e44 Removing intermediate container a657ed4f5cab Successfully built 4dd197569e44 $ docker run --rm -iv${PWD}:/host-volume my-image sh -s <<EOF chown -v $(id -u):$(id -g) *.txt cp -va *.txt /host-volume EOF changed ownership of '/host-volume/bar.txt' to 10335:11111 changed ownership of '/host-volume/qux.txt' to 10335:11111 changed ownership of '/host-volume/foo.txt' to 10335:11111 'bar.txt' -> '/host-volume/bar.txt' 'foo.txt' -> '/host-volume/foo.txt' 'qux.txt' -> '/host-volume/qux.txt' $ ls -n total 0 -rw-r--r-- 1 10335 11111 0 May 7 18:22 bar.txt -rw-r--r-- 1 10335 11111 0 May 7 18:22 foo.txt -rw-r--r-- 1 10335 11111 0 May 7 18:22 qux.txt This trick works because the chown invocation within the heredoc the takes $(id -u):$(id -g) values from outside the running container; i.e., the docker host. The benefits are: you don't have to docker container run --name or docker container create --name before you don't have to docker container rm after
From DockerContainer To LocalMachine $docker cp containerId:/sourceFilePath/someFile.txt C:/localMachineDestinationFolder From LocalMachine To DockerContainer $docker cp C:/localMachineSourceFolder/someFile.txt containerId:/containerDestinationFolder
Mount a volume, copy the artifacts, adjust owner id and group id: mkdir artifacts docker run -i --rm -v ${PWD}/artifacts:/mnt/artifacts centos:6 /bin/bash << COMMANDS ls -la > /mnt/artifacts/ls.txt echo Changing owner from \$(id -u):\$(id -g) to $(id -u):$(id -g) chown -R $(id -u):$(id -g) /mnt/artifacts COMMANDS EDIT: Note that some of the commands like $(id -u) are backslashed and will therefore be processed within the container, while the ones that are not backslashed will be processed by the shell being run in the host machine BEFORE the commands are sent to the container.
Most of the answers do not indicate that the container must run before docker cp will work: docker build -t IMAGE_TAG . docker run -d IMAGE_TAG CONTAINER_ID=$(docker ps -alq) # If you do not know the exact file name, you'll need to run "ls" # FILE=$(docker exec CONTAINER_ID sh -c "ls /path/*.zip") docker cp $CONTAINER_ID:/path/to/file . docker stop $CONTAINER_ID
If you don't have a running container, just an image, and assuming you want to copy just a text file, you could do something like this: docker run the-image cat path/to/container/file.txt > path/to/host/file.txt
With the release of Docker 19.03, you can skip creating the container and even building an image. There's an option with BuildKit based builds to change the output destination. You can use this to write the results of the build to your local directory rather than into an image. E.g. here's a build of a go binary: $ ls Dockerfile go.mod main.go $ cat Dockerfile FROM golang:1.12-alpine as dev RUN apk add --no-cache git ca-certificates RUN adduser -D appuser WORKDIR /src COPY . /src/ CMD CGO_ENABLED=0 go build -o app . && ./app FROM dev as build RUN CGO_ENABLED=0 go build -o app . USER appuser CMD [ "./app" ] FROM scratch as release COPY --from=build /etc/passwd /etc/group /etc/ COPY --from=build /src/app /app USER appuser CMD [ "/app" ] FROM scratch as artifact COPY --from=build /src/app /app FROM release From the above Dockerfile, I'm building the artifact stage that only includes the files I want to export. And the newly introduced --output flag lets me write those to a local directory instead of an image. This needs to be performed with the BuildKit engine that ships with 19.03: $ DOCKER_BUILDKIT=1 docker build --target artifact --output type=local,dest=. . 
[+] Building 43.5s (12/12) FINISHED => [internal] load build definition from Dockerfile 0.7s => => transferring dockerfile: 572B 0.0s => [internal] load .dockerignore 0.5s => => transferring context: 2B 0.0s => [internal] load metadata for docker.io/library/golang:1.12-alpine 0.9s => [dev 1/5] FROM docker.io/library/golang:1.12-alpine#sha256:50deab916cce57a792cd88af3479d127a9ec571692a1a9c22109532c0d0499a0 22.5s => => resolve docker.io/library/golang:1.12-alpine#sha256:50deab916cce57a792cd88af3479d127a9ec571692a1a9c22109532c0d0499a0 0.0s => => sha256:1ec62c064901392a6722bb47a377c01a381f4482b1ce094b6d28682b6b6279fd 155B / 155B 0.3s => => sha256:50deab916cce57a792cd88af3479d127a9ec571692a1a9c22109532c0d0499a0 1.65kB / 1.65kB 0.0s => => sha256:2ecd820bec717ec5a8cdc2a1ae04887ed9b46c996f515abc481cac43a12628da 1.36kB / 1.36kB 0.0s => => sha256:6a17089e5a3afc489e5b6c118cd46eda66b2d5361f309d8d4b0dcac268a47b13 3.81kB / 3.81kB 0.0s => => sha256:89d9c30c1d48bac627e5c6cb0d1ed1eec28e7dbdfbcc04712e4c79c0f83faf17 2.79MB / 2.79MB 0.6s => => sha256:8ef94372a977c02d425f12c8cbda5416e372b7a869a6c2b20342c589dba3eae5 301.72kB / 301.72kB 0.4s => => sha256:025f14a3d97f92c07a07446e7ea8933b86068d00da9e252cf3277e9347b6fe69 125.33MB / 125.33MB 13.7s => => sha256:7047deb9704134ff71c99791be3f6474bb45bc3971dde9257ef9186d7cb156db 125B / 125B 0.8s => => extracting sha256:89d9c30c1d48bac627e5c6cb0d1ed1eec28e7dbdfbcc04712e4c79c0f83faf17 0.2s => => extracting sha256:8ef94372a977c02d425f12c8cbda5416e372b7a869a6c2b20342c589dba3eae5 0.1s => => extracting sha256:1ec62c064901392a6722bb47a377c01a381f4482b1ce094b6d28682b6b6279fd 0.0s => => extracting sha256:025f14a3d97f92c07a07446e7ea8933b86068d00da9e252cf3277e9347b6fe69 5.2s => => extracting sha256:7047deb9704134ff71c99791be3f6474bb45bc3971dde9257ef9186d7cb156db 0.0s => [internal] load build context 0.3s => => transferring context: 2.11kB 0.0s => [dev 2/5] RUN apk add --no-cache git ca-certificates 3.8s => [dev 3/5] RUN adduser -D appuser 1.7s => [dev 4/5] 
WORKDIR /src 0.5s => [dev 5/5] COPY . /src/ 0.4s => [build 1/1] RUN CGO_ENABLED=0 go build -o app . 11.6s => [artifact 1/1] COPY --from=build /src/app /app 0.5s => exporting to client 0.1s => => copying files 10.00MB 0.1s After the build was complete the app binary was exported: $ ls Dockerfile app go.mod main.go $ ./app Ready to receive requests on port 8080 Docker has other options to the --output flag documented in their upstream BuildKit repo: https://github.com/moby/buildkit#output
For anyone trying to do this with a MySQL container and storing the volumes locally on your machine. I used the syntax that was provided in the top rated reply to this question. But had to use a specific path that's specific to MySQL docker cp containerIdHere:/var/lib/mysql pathToYourLocalMachineHere
I am posting this for anyone that is using Docker for Mac. This is what worked for me: $ mkdir mybackup # local directory on Mac $ docker run --rm --volumes-from <containerid> \ -v `pwd`/mybackup:/backup \ busybox \ cp /data/mydata.txt /backup Note that when I mount using -v that backup directory is automatically created. I hope this is useful to someone someday. :)
docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH to copy from the container to the host machine. e.g. docker cp test:/opt/file1 /etc/ For Vice-Versa: docker cp [OPTIONS] SRC_PATH CONTAINER:DEST_PATH to copy from host machine to container.
Another good option is first build the container and then run it using the -c flag with the shell interpreter to execute some commads docker run --rm -i -v <host_path>:<container_path> <mydockerimage> /bin/sh -c "cp -r /tmp/homework/* <container_path>" The above command does this: -i = run the container in interactive mode --rm = removed the container after the execution. -v = shared a folder as volume from your host path to the container path. Finally, the /bin/sh -c lets you introduce a command as a parameter and that command will copy your homework files to the container path. I hope this additional answer may help you
docker run -dit --rm IMAGE docker cp CONTAINER:SRC_PATH DEST_PATH https://docs.docker.com/engine/reference/commandline/run/ https://docs.docker.com/engine/reference/commandline/cp/
I used PowerShell (Admin) with this command. docker cp {container id}:{container path}/error.html C:\\error.html Example docker cp ff3a6608467d:/var/www/app/error.html C:\\error.html
sudo docker cp <running_container_id>:<full_file_path_in_container> <path_on_local_machine> Example : sudo docker cp d8a17dfc455f:/tests/reports /home/acbcb/Documents/abc
If you just want to pull a file from an image (instead of a running container) you can do this: docker run --rm <image> cat <source> > <local_dest> This will bring up the container, write the new file, then remove the container. One drawback, however, is that the file permissions and modified date will not be preserved.
As a more general solution, there's a CloudBees plugin for Jenkins to build inside a Docker container. You can select an image to use from a Docker registry or define a Dockerfile to build and use. It'll mount the workspace into the container as a volume (with appropriate user), set it as your working directory, do whatever commands you request (inside the container). You can also use the docker-workflow plugin (if you prefer code over UI) to do this, with the image.inside() {} command. Basically all of this, baked into your CI/CD server and then some.
The easiest way is to just create a container, get the ID, and then copy from there IMAGE_TAG=my-image-tag container=$(docker create ${IMAGE_TAG}) docker cp ${container}:/src-path ./dst-path/
Create a data directory on the host system (outside the container) and mount this to a directory visible from inside the container. This places the files in a known location on the host system, and makes it easy for tools and applications on the host system to access the files docker run -d -v /path/to/Local_host_dir:/path/to/docker_dir docker_image:tag
This can also be done in the SDK for example python. If you already have a container built you can lookup the name via console ( docker ps -a ) name seems to be some concatenation of a scientist and an adjective (i.e. "relaxed_pasteur"). Check out help(container.get_archive) : Help on method get_archive in module docker.models.containers: get_archive(path, chunk_size=2097152) method of docker.models.containers.Container instance Retrieve a file or folder from the container in the form of a tar archive. Args: path (str): Path to the file or folder to retrieve chunk_size (int): The number of bytes returned by each iteration of the generator. If ``None``, data will be streamed as it is received. Default: 2 MB Returns: (tuple): First element is a raw tar data stream. Second element is a dict containing ``stat`` information on the specified ``path``. Raises: :py:class:`docker.errors.APIError` If the server returns an error. Example: >>> f = open('./sh_bin.tar', 'wb') >>> bits, stat = container.get_archive('/bin/sh') >>> print(stat) {'name': 'sh', 'size': 1075464, 'mode': 493, 'mtime': '2018-10-01T15:37:48-07:00', 'linkTarget': ''} >>> for chunk in bits: ... f.write(chunk) >>> f.close() So then something like this will pull out from the specified path ( /output) in the container to your host machine and unpack the tar. import docker import os import tarfile # Docker client client = docker.from_env() #container object container = client.containers.get("relaxed_pasteur") #setup tar to write bits to f = open(os.path.join(os.getcwd(),"output.tar"),"wb") #get the bits bits, stat = container.get_archive('/output') #write the bits for chunk in bits: f.write(chunk) f.close() #unpack tar = tarfile.open("output.tar") tar.extractall() tar.close()
docker cp [OPTIONS] SRC_PATH CONTAINER:DEST_PATH The DEST_PATH must be pre-exist
If you use podman/buildah1, it offers greater flexibility for copying files from a container to the host because it allows you to mount the container. After you create the container as in this answer podman create --name dummy IMAGE_NAME Now we can mount the entire container, and then we use the cp utility found in almost every linux box to copy the contents of /etc/foobar from the container (dummy), into /tmp on our host machine. All this can be done rootless. Observe: $ podman unshare -- bash -c ' mnt=$(podman mount dummy) cp -R ${mnt}/etc/foobar /tmp podman umount dummy ' 1. podman uses buildah internally, and they also share almost the same api
if you need a small file, you can use this section Docker container inside docker run -it -p 4122:4122 <container_ID> nc -l -p 4122 < Output.txt Host machine nc 127.0.0.1 4122 > Output.txt
Create a path where you want to copy the file and then use: docker run -d -v hostpath:dockerimag
You can use a bind mount instead of a volume if you want to mount only one folder rather than create special storage for a container.

Build your image with a tag:

docker build . -t <image>

Run your image, binding the current $(pwd) directory where app.py is stored and mapping it to /root/example/ inside your container:

docker run --mount type=bind,source="$(pwd)",target=/root/example/ <image> python app.py
How to override the files in a Docker container
I have the below Dockerfile:

FROM node:16.7.0

ARG JS_FILE
ENV JS_FILE=${JS_FILE:-"./sum.js"}
ARG JS_TEST_FILE
ENV JS_TEST_FILE=${JS_TEST_FILE:-"./sum.test.js"}

WORKDIR /app

# Copy the package.json to /app
COPY ["package.json", "./"]

# Copy source code into the image
COPY ${JS_FILE} .
COPY ${JS_TEST_FILE} .

# Install dependencies (if any) in package.json
RUN npm install

CMD ["sh", "-c", "tail -f /dev/null"]

After building the Docker image, if I run the image with the below command, I still cannot see the updated files:

docker run --env JS_FILE="./Scripts/updated_sum.js" --env JS_TEST_FILE="./Test/updated_sum.test.js" -it <image-name>

I would like to see updated_sum.js and updated_sum.test.js in my container; however, I still see sum.js and sum.test.js. Is it possible to achieve this?

This is my current folder/file structure:

.
-->Dockerfile
-->package.json
-->sum.js
-->sum.test.js
-->Test
-->--->updated_sum.test.js
-->Scripts
-->--->updated_sum.js
Using Docker generally involves two phases. First, you compile your application into an image, and then you run a container based on that image. With the plain Docker CLI, these correspond to the docker build and docker run steps. docker build does everything in the Dockerfile, then stops; docker run starts from the fixed result of that and runs the image's CMD.

So if you run

docker build -t sum .

the sum:latest image will have the sum.js and sum.test.js files, because that's what the Dockerfile COPYs in. You can then

docker run --rm sum ls
docker run --rm sum node ./sum.js

to see and run the contents of the image. (Specifying the latter command as CMD would be a better practice.)

You can run the command with different environment variables, but it won't change the files in the image:

docker run --rm -e JS_FILE=missing.js sum ls              # still only has sum.js
docker run --rm -e JS_FILE=missing.js sum node missing.js # not found

Instead you need to rebuild the image, using docker build --build-arg options to provide the values:

docker build \
  --build-arg JS_FILE=./product.js \
  --build-arg JS_TEST_FILE=./product.test.js \
  -t product \
  .
docker run --rm product node ./product.js

The extremely parametrizable Dockerfile you show here can be a little harder to work with than a single-purpose Dockerfile. I might create a separate Dockerfile per application:

# Dockerfile.sum
FROM node:16.7.0
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY sum.js sum.test.js ./
CMD node ./sum.js

Another option is to COPY the entire source tree into the image (JavaScript files are pretty small compared to a complete Node installation) and use a docker run command to pick which script to run.
Docker: Copy file out of container while building it
I build the following image with docker build -t mylambda . I now try to export lambdatest.zip to my localhost while building it, so that I see the .zip file on my Desktop. So far I used

docker cp <Container ID>:/var/task/lambdatest.zip ~/Desktop

but that doesn't work inside my Dockerfile (?). Do you have any ideas?

FROM lambci/lambda:build-python3.7

COPY lambda_function.py .

RUN python3 -m venv venv
RUN . venv/bin/activate

# ZIP
RUN pushd /var/task/venv/lib/python3.7/site-packages/
# Execute "zip" in bash for explanation of -9qr
RUN zip -9qr /var/task/lambdatest.zip *

Dockerfile (updated):

FROM lambci/lambda:build-python3.7

RUN python3 -m venv venv
RUN . venv/bin/activate

RUN pip install --upgrade pip
RUN pip install pystan==2.18
RUN pip install fbprophet

WORKDIR /var/task/venv/lib/python3.7/site-packages
COPY lambda_function.py .
COPY .lambdaignore .

RUN echo "Package size: $(du -sh | cut -f1)"
RUN zip -9qr lambdatest.zip *
RUN cat .lambdaignore | xargs zip -9qr /var/task/lambdatest.zip * -x
The typical answer is that you do not. A Dockerfile does not have access to write files out to the host, by design, just as it does not have access to read arbitrary files from outside of the build context. There are various reasons for that, including security (you don't want an image build dropping a backdoor on a build host in the cloud) and reproducibility (images should not have dependencies outside of their context).

As a result, you need to take an extra step to extract the contents of an image back to the host. Typically this involves creating a container and running a docker cp command, along the lines of the following:

docker build -t your_image .
docker create --name extract your_image
docker cp extract:/path/to/files /path/on/host
docker rm extract

Or it can involve I/O pipes, where you run a tar command inside the container to package the files and pipe that to a tar command running on the host to save the files:

docker build -t your_image .
docker run --rm your_image tar -cC /path/in/container . | tar -xC /path/on/host

Recently, Docker has been working on buildx, which is currently experimental. Using that, you can create a stage that consists of the files you want to export to the host, and use the --output option to write that stage to the host rather than to an image. Your Dockerfile would then look like:

FROM lambci/lambda:build-python3.7 as build
COPY lambda_function.py .
RUN python3 -m venv venv
RUN . venv/bin/activate
# ZIP
RUN pushd /var/task/venv/lib/python3.7/site-packages/
# Execute "zip" in bash for explanation of -9qr
RUN zip -9qr /var/task/lambdatest.zip *

FROM scratch as artifact
COPY --from=build /var/task/lambdatest.zip /lambdatest.zip

FROM build as release

And then the build command to extract the zip file would look like:

docker buildx build --target=artifact --output type=local,dest=$(pwd)/out/ .
I believe buildx is still marked as experimental in the latest release, so to enable that, you need at least the following JSON entry in $HOME/.docker/config.json:

{ "experimental": "enabled" }

And then for all the buildx features, you will want to create a non-default builder with docker buildx create.

With recent versions of the docker CLI, integration with BuildKit has exposed more options, and it is no longer necessary to run buildx to get access to the output flag. That means the above changes to:

docker build --target=artifact --output type=local,dest=$(pwd)/out/ .

If BuildKit hasn't been enabled on your version (it should be on by default in 20.10), you can enable it in your shell with:

export DOCKER_BUILDKIT=1

or, for the entire host, you can make it the default with the following in /etc/docker/daemon.json:

{ "features": { "buildkit": true } }

And to pick up the daemon.json change, the docker engine needs to be reloaded:

systemctl reload docker
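As an aside on the tar-pipe variant above: it is plain shell plumbing, so it can be demonstrated locally without docker at all. The left-hand tar writes an archive of the source directory to stdout and the right-hand tar unpacks it at the destination; with docker, the left-hand side simply runs inside the container via docker run (the directory and file names here are made up for the demo):

```shell
# stage a fake build artifact
mkdir -p src dest
echo "artifact" > src/build.txt

# -c create an archive, -C change directory first; -x extract on the
# receiving side of the pipe
tar -cC src . | tar -xC dest

cat dest/build.txt   # -> artifact
```

The --rm in the docker version ensures the throwaway container is cleaned up once the pipe closes.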
Since docker 18.09, it natively supports a custom backend called BuildKit:

DOCKER_BUILDKIT=1 docker build -o target/folder myimage

This allows you to copy your latest stage to target/folder. If you want only specific files and not an entire filesystem, you can add a stage to your build:

FROM XXX as builder-stage
# Your existing dockerfile stages

FROM scratch
COPY --from=builder-stage /file/to/export /

Note: You will need your docker client and engine to be compatible with Docker Engine API 1.40+, otherwise docker will not understand the -o flag.

Reference: https://docs.docker.com/engine/reference/commandline/build/#custom-build-outputs
Dockerfile: I want to invoke one script from the Dockerfile
I am creating a Docker image named soaphonda. The code of the Dockerfile is below:

FROM centos:7
FROM python:2.7
FROM java:openjdk-7-jdk
MAINTAINER Daniel Davison <sircapsalot@gmail.com>

# Version
ENV SOAPUI_VERSION 5.3.0

COPY entry_point.sh /opt/bin/entry_point.sh
COPY server.py /opt/bin/server.py
COPY server_index.html /opt/bin/server_index.html
COPY SoapUI-5.3.0.tar.gz /opt/SoapUI-5.3.0.tar.gz
COPY exit.sh /opt/bin/exit.sh
RUN chmod +x /opt/bin/entry_point.sh
RUN chmod +x /opt/bin/server.py

# Download and unarchive SoapUI
RUN mkdir -p /opt
WORKDIR /opt
RUN tar -xvf SoapUI-5.3.0.tar.gz .

# Set working directory
WORKDIR /opt/bin

# Set environment
ENV PATH ${PATH}:/opt/SoapUI-5.3.0/bin

EXPOSE 3000
RUN chmod +x /opt/SoapUI-5.3.0/bin/mockservicerunner.sh
CMD ["/opt/bin/entry_point.sh","exit","pwd", "sh", "/Users/ankitsrivastava/Documents/SametimeFileTransfers/Honda/files/hondascript.sh"]

The image builds successfully. I want that once the image creation is done, it should retag and push to Docker Hub. For that I have created the script below:

docker tag soaphonda ankiksri/soaphonda
docker push ankiksri/soaphonda
docker login
docker run -d -p 8089:8089 --name demo ankiksri/soaphonda
containerid=`docker ps -aqf "name=demo"`
echo $containerid
docker exec -it $containerid bash -c 'cd ../SoapUI-5.3.0;sh /opt/SoapUI-5.3.0/bin/mockservicerunner.sh "/opt/SoapUI-5.3.0/Honda-soapui-project.xml"'

Please help me understand how I can call the second script from the Dockerfile, and why the exit command is not working in the Dockerfile.
What you have to understand here is that the commands you specify within the Dockerfile are the commands that get executed when you build and run a Docker container from the image you have created using that Dockerfile. Tagging, pushing, and running the Docker image must all be done after you have built the image from the Dockerfile; they cannot be done within the Dockerfile itself.

To achieve this kind of thing you would have to use a build tool like Maven (as an example) and automate the process of tagging and pushing the image. Also, looking at your image, I don't see any necessity to keep tagging and pushing the image unless you are continuously updating it. Finally, there is no point in using three FROM commands, as it will unnecessarily make your Docker image size huge.
Docker: No such file or directory
[root@mymachine redisc]# ls
app.py  Dockerfile  redis.conf  redis-server  requirements.txt
[root@mymachine redisc]# cat Dockerfile
# Use an official Python runtime as a parent image
FROM python:2.7-slim
#FROM alpine:3.7

# Define mountable directories.
VOLUME ["/x/build/"]

# Set the working directory to /app
WORKDIR /app

# Copy the current directory contents into the container at /app
ADD . /app

# Make port 80 available to the world outside this container
EXPOSE 6379

# Define environment variable
ENV NAME Redis

# Run app.py when the container launches
CMD ["/app/redis-server", "/app/redis_rtp.conf"]

I've built the image as myredis.

[root@mymachine redisc]# docker run -p 6379:6379 myredis

*** FATAL CONFIG FILE ERROR ***
Reading the configuration file, at line 104
>>> 'logfile /x/build/redis/logs/redis_6379_container.log'
Can't open the log file: No such file or directory

The above gave me an error, so I've tried supplying the path:

[root@mymachine redisc]# docker run -p 6379:6379 -v /x/build/redis/log myredis

It gave me the same error, but the dir exists:

[root@mymachine9 redisc]# ls /x/build/redis/logs/
redis2_6379.log  redis_6379.log

Why isn't the dir accessible from the container? How can I fix it? Thank you.
VOLUME ["/x/build/"] means that you want to export the /x/build/ dir of the container as a volume on the host OS. In contrast, I think you expect the container to mount /x/build/ of the host OS into the container. That is why I asked whether [root@mymachine9 redisc]# ls /x/build/redis/logs/ was run in the container or in the host OS, and that is why docker returns the error No such file or directory: the container just has an empty /x/build/ dir. (If the base image doesn't have /x/build/, docker will create the dir.)

For example:

# Add into your Dockerfile
RUN mkdir -p /testDir && touch /testDir/test && echo "test1234" >> /testDir/test
VOLUME ["/testDir"]

---------

# Run a container
$ docker run --name test image_name

# Check mount position
$ docker inspect test -f {{.Mounts}}
[{volume 09e3cedef5ceeef0cbd944785e0ea629d4c65a20b10d1384bbd50a1c67879845 /var/lib/docker/volumes/09e3cedef5ceeef0cbd944785e0ea629d4c65a20b10d1384bbd50a1c67879845/_data /testDir local  true }]

# Move to mount position
$ cd /var/lib/docker/volumes/09e3cedef5ceeef0cbd944785e0ea629d4c65a20b10d1384bbd50a1c67879845/_data

# Check if the content is from testDir of the base image
$ ls
test
$ cat test
test1234

As @fernandezcuesta commented, you can use the bind option for your purpose:

-v /x/build/redis/logs:/x/build/redis/logs

or

--mount type=bind,source=/x/build/redis/logs,target=/x/build/redis/logs

-----Edit-----

For now, there is no way to use the bind option in a Dockerfile, that is, while building an image. Refer to this issue to find out why docker doesn't support that. In short:

- bind mounts are linked to the host; as Dockerfiles can be shared, that would break compatibility (a Dockerfile with your bind mounts won't work on my machine)
- bind mounts are more related to the run than to the build