Docker Container Migration with CRIU

With help from Saied Kazemi I was able to checkpoint and restore a container using CRIU on Ubuntu 14.04, following his "Docker suspend and resume using CRIU" write-up.
Now I am trying to migrate this container from one machine to another.
I am using these steps:
export cid=$(docker run -d ubuntu tail -f /dev/null)
docker exec $cid touch /test.walid
mkdir /tmp/docker-migration
mkdir /tmp/docker-migration/$cid
docker checkpoint --image-dir=/tmp/docker-migration/$cid $cid
ssh walid@192.168.1.10 mkdir /tmp/docker-migration
ssh walid@192.168.1.10 mkdir /tmp/docker-migration/$cid
scp -r /tmp/docker-migration/$cid walid@192.168.1.10:/tmp/docker-migration
ssh walid@192.168.1.10 mkdir /tmp/$cid
scp -r /var/lib/docker/0.0/containers/$cid walid@192.168.1.10:/tmp
ssh -t walid@192.168.1.10 sudo mv /tmp/$cid /var/lib/docker/0.0/containers/
ssh -t walid@192.168.1.10 sudo docker restore --force=true --image-dir=/tmp/docker-migration/$cid $cid
and got this response:
Error response from daemon: No such container: fea338e81750b2377c2a845e30c49b7055519e39448091715c2c6a7896da3562
Error: failed to restore one or more containers
Both machines have Docker and CRIU installed, and checkpointing works on its own.

Docker container migration using CRIU is still under development. So far the focus of checkpoint and restore integration into Docker has been C/R'ing on the same machine.
That said, it is possible to migrate containers manually by copying not only the container image created by CRIU after checkpoint (as you have done) but also the container directory created by Docker in /var/lib/docker/0.0/containers/$cid, as well as the container's root filesystem in /var/lib/docker/0.0/image. Manually migrating the container's filesystem is a bit tricky, especially if you are using a union filesystem like AUFS or OverlayFS. Also, you need to restart the Docker daemon on the destination machine for it to see the container.
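In command form, the missing pieces on top of the steps above might look like this (a rough sketch: the /var/lib/docker/0.0 layout comes from the experimental checkpoint/restore fork used in the question, and the restart command varies by init system):
scp -r /var/lib/docker/0.0/image walid@192.168.1.10:/tmp/docker-image
ssh -t walid@192.168.1.10 sudo cp -r /tmp/docker-image/. /var/lib/docker/0.0/image/
ssh -t walid@192.168.1.10 sudo service docker restart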

On the destination machine, you have to create or run a container. This container will be overwritten by the restored image.
So:
ssh -t walid@192.168.1.10 "newid=\$(sudo docker run -d ubuntu tail -f /dev/null); sudo docker restore --force=true --image-dir=/tmp/docker-migration/$cid \$newid"
(Both commands run in a single ssh invocation so that $newid is set and used in the same remote shell; $cid still expands locally.)
In my case, that worked like a charm!

Here is my test: migrating a container across nodes.
Restore:
vagrant ssh vm2 -- 'docker run --name=foo -d ubuntu tail -f /dev/null && docker rm -f foo'
docker create --name=CONTAINER_NAME base_image
docker restore --force=true --image-dir=/tmp/{dump files}
github: https://github.com/hixichen/criu_test


How to print the current directory of the docker image which is running in a centOS7 OS from windows docker desktop [duplicate]

I've noticed with docker that I need to understand what's happening inside a container or what files exist in there. One example is downloading images from the docker index - you don't have a clue what the image contains so it's impossible to start the application.
What would be ideal is to be able to ssh into them or the equivalent. Is there a tool to do this, or is my conceptualisation of docker wrong in thinking I should be able to do this?
Here are a couple of different methods...
A) Use docker exec (easiest)
Docker version 1.3 or newer supports the exec command, which behaves similarly to nsenter. It can run a new process in an already running container (the container must still have its PID 1 process running). You can run /bin/bash to explore the container's state:
docker exec -t -i mycontainer /bin/bash
see Docker command line documentation
B) Use Snapshotting
You can evaluate the container filesystem this way:
# find ID of your running container:
docker ps
# create image (snapshot) from container filesystem
docker commit 12345678904b5 mysnapshot
# explore this filesystem using bash (for example)
docker run -t -i mysnapshot /bin/bash
This way, you can evaluate the filesystem of the running container at a precise moment in time. The container is still running, and future changes are not included.
You can later delete the snapshot (the filesystem of the running container is not affected!):
docker rmi mysnapshot
C) Use ssh
If you need continuous access, you can install sshd in your container and run the sshd daemon:
docker run -d -p 22 mysnapshot /usr/sbin/sshd -D
# you need to find out which port to connect:
docker ps
This way, you can run your app using ssh (connect and execute what you want).
D) Use nsenter
Use nsenter, see Why you don't need to run SSHd in your Docker containers
The short version is: with nsenter, you can get a shell into an
existing container, even if that container doesn’t run SSH or any kind
of special-purpose daemon
UPDATE: EXPLORING!
This command should let you explore a running docker container:
docker exec -it name-of-container bash
The equivalent for this in docker-compose would be:
docker-compose exec web bash
(web is the name-of-service in this case and it has tty by default.)
Once you are inside do:
ls -lsa
or any other bash command like:
cd ..
This command should let you explore a docker image:
docker run --rm -it --entrypoint=/bin/bash name-of-image
once inside do:
ls -lsa
or any other bash command like:
cd ..
The -it stands for interactive... and tty.
This command should let you inspect a running docker container or image:
docker inspect name-of-container-or-image
You might want to do this to find out whether there is any bash or sh in there. Look for Entrypoint or Cmd in the JSON output.
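For example, a Go template can pull out just those two fields without scanning the full JSON (a sketch; name-of-container-or-image is a placeholder):
docker inspect -f 'Entrypoint: {{.Config.Entrypoint}}  Cmd: {{.Config.Cmd}}' name-of-container-or-image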
NOTE: This answer relies on common tools being present, but if there is no bash shell or common tools like ls in the image, you could first add them in a layer, provided you have access to the Dockerfile:
example for alpine:
RUN apk add --no-cache bash
Otherwise, if you don't have access to the Dockerfile, just copy the files out of a newly created container and look through them:
docker create <image>  # returns a container ID; the container is never started
docker cp <container ID>:<source_path> <destination_path>
docker rm <container ID>
cd <destination_path> && ls -lsah
see docker exec documentation
see docker-compose exec documentation
see docker inspect documentation
see docker create documentation
In case your container is stopped or doesn't have a shell (e.g. hello-world mentioned in the installation guide, or non-alpine traefik), this is probably the only possible method of exploring the filesystem.
You may archive your container's filesystem into a tar file:
docker export adoring_kowalevski > contents.tar
Or list the files:
docker export adoring_kowalevski | tar t
Do note that depending on the image, it might take some time and disk space.
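If you'd rather browse the files on the host, you can extract the export into a scratch directory first (a sketch reusing the container name from above; /tmp/rootfs is an arbitrary choice):
mkdir /tmp/rootfs
docker export adoring_kowalevski | tar -xf - -C /tmp/rootfs
ls -la /tmp/rootfs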
Before container creation:
If you want to explore the structure of the image that is mounted inside the container, you can do
sudo docker image save image_name > image.tar
tar -xvf image.tar
This gives you visibility into all the layers of the image and its configuration, which is stored in JSON files.
After container creation:
There are already a lot of answers above for this. My preferred way would be:
docker exec -t -i container /bin/bash
The most upvoted answer works for me when the container is actually started, but when it isn't possible to run it and you, for example, want to copy files from the container, this has saved me before:
docker cp <container-name>:<path/inside/container> <path/on/host/>
Thanks to docker cp (link) you can copy directly from the container as if it were any other part of your filesystem.
For example, recovering all files inside a container:
mkdir /tmp/container_temp
docker cp example_container:/ /tmp/container_temp/
Note that you don't need to specify that you want to copy recursively.
The file system of the container is in the data folder of Docker, normally in /var/lib/docker. In order to start and inspect a running container's file system, do the following:
hash=$(docker run -d busybox tail -f /dev/null)
cd /var/lib/docker/aufs/mnt/$hash
And now the current working directory is the root of the container.
You can use dive to view the image content interactively with a TUI:
https://github.com/wagoodman/dive
Try using
docker exec -it <container-name> /bin/bash
There is a possibility that bash is not present in the image. In that case you can use
docker exec -it <container-name> sh
On Ubuntu 14.04 running Docker 1.3.1, I found the container root filesystem on the host machine in the following directory:
/var/lib/docker/devicemapper/mnt/<container id>/rootfs/
Full Docker version information:
Client version: 1.3.1
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): 4e9bbfa
OS/Arch (client): linux/amd64
Server version: 1.3.1
Server API version: 1.15
Go version (server): go1.3.3
Git commit (server): 4e9bbfa
In my case, no shell except sh was available in the container. So this worked like a charm:
docker exec -it <container-name> sh
I use another dirty trick that is aufs/devicemapper agnostic.
I look at the command that the container is running, e.g. with docker ps,
and if it's an apache or java process I just do the following:
sudo -s
cd /proc/$(pgrep java)/root/
and voilà, you're inside the container.
Basically, as root you can cd into the /proc/<PID>/root/ folder, as long as that process is run by the container. Beware: symlinks will not make sense while using that mode.
The most voted answer is good, except if your container isn't an actual Linux system.
Many containers (especially the Go-based ones) don't have any standard binary (no /bin/bash or /bin/sh). In that case, you will need to access the actual container's files directly:
Works like a charm:
name=<name>
dockerId=$(docker inspect -f {{.Id}} $name)
mountId=$(cat /var/lib/docker/image/aufs/layerdb/mounts/$dockerId/mount-id)
cd /var/lib/docker/aufs/mnt/$mountId
Note: You need to run it as root.
Only for Linux
The simplest way I use is the /proc directory; the container must be running in order to inspect its files.
Find out the process ID (PID) of the container and store it in a variable:
PID=$(docker inspect -f '{{.State.Pid}}' your-container-name-here)
Make sure the container process is running, and use the variable to get into the container folder:
cd /proc/$PID/root
If you want to get into the dir without first storing the PID, just use this long command:
cd /proc/$(docker inspect -f '{{.State.Pid}}' your-container-name-here)/root
Tips:
After you get inside the container, everything you do will affect the actual process of the container, such as stopping the service or changing the port number.
Hope it helps
Note:
This method only works while the container is still running; once the container has stopped or been removed, the directory no longer exists.
None of the existing answers address the case of a container that exited (and can't be restarted) and/or doesn't have any shell installed (e.g. distroless ones). This one works as long as you have root access to the Docker host.
For a real manual inspection, find out the layer IDs first:
docker inspect my-container | jq '.[0].GraphDriver.Data'
In the output, you should see something like
"MergedDir": "/var/lib/docker/overlay2/03e8df748fab9526594cfdd0b6cf9f4b5160197e98fe580df0d36f19830308d9/merged"
Navigate into this folder (as root) to find the current visible state of the container filesystem.
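If jq isn't installed, the same path can usually be read with a plain Go template (a sketch for the overlay2 driver):
cd "$(docker inspect -f '{{.GraphDriver.Data.MergedDir}}' my-container)"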
This will launch a bash session for the image:
docker run --rm -it --entrypoint=/bin/bash name-of-image
On newer versions of Docker you can run docker exec [container_name] [command], which runs a command (such as a shell) inside your container.
So to get a list of all the files in a container, just run docker exec [container_name] ls
I wanted to do this, but I was unable to exec into my container as it had stopped and wasn't starting up again due to some error in my code.
What worked for me was to simply copy the contents of the entire container into a new folder like this:
docker cp container_name:/app/ new_dummy_folder
I was then able to explore the contents of this folder as one would do with a normal folder.
For me, this one works well (thanks to the last comments for pointing out the directory /var/lib/docker/):
chroot /var/lib/docker/containers/2465790aa2c4*/root/
Here, 2465790aa2c4 is the short ID of the running container (as displayed by docker ps), followed by a star.
For docker aufs driver:
The script below will find the container root dir (tested on Docker 1.7.1 and 1.10.3):
#!/bin/sh
if [ -z "$1" ] ; then
    echo 'usage: docker-find-root <container_id_or_name>'
    exit 1
fi
CID=$(docker inspect --format '{{.Id}}' "$1")
if [ -n "$CID" ] ; then
    if [ -f /var/lib/docker/image/aufs/layerdb/mounts/$CID/mount-id ] ; then
        F1=$(cat /var/lib/docker/image/aufs/layerdb/mounts/$CID/mount-id)
        d1=/var/lib/docker/aufs/mnt/$F1
    fi
    if [ ! -d "$d1" ] ; then
        d1=/var/lib/docker/aufs/diff/$CID
    fi
    echo $d1
fi
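Saved as docker-find-root and made executable, usage would look like this (my_container is a hypothetical name):
chmod +x docker-find-root
./docker-find-root my_container
# prints something like /var/lib/docker/aufs/mnt/<mount-id>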
This answer will help those (like myself) who want to explore the docker volume filesystem even if the container isn't running.
List running docker containers:
docker ps
=> CONTAINER ID "4c721f1985bd"
Look at the docker volume mount points on your local physical machine (https://docs.docker.com/engine/tutorials/dockervolumes/):
docker inspect -f {{.Mounts}} 4c721f1985bd
=> [{ /tmp/container-garren /tmp true rprivate}]
This tells me that the local physical machine directory /tmp/container-garren is mapped to the /tmp docker volume destination.
Knowing the local physical machine directory (/tmp/container-garren) means I can explore the filesystem whether or not the docker container is running. This was critical to helping me figure out that there was some residual data that shouldn't have persisted even after the container was not running.
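A more readable variant formats one mount per line with a range template (a sketch using the container ID from above):
docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}' 4c721f1985bd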
If you are using Docker v19.03, follow the steps below.
# find ID of your running container:
docker ps
# create image (snapshot) from container filesystem
docker commit 12345678904b5 mysnapshot
# explore this filesystem
docker run -t -i mysnapshot /bin/sh
For an already running container, you can do:
dockerId=$(docker inspect -f {{.Id}} [docker_id_or_name])
cd /var/lib/docker/btrfs/subvolumes/$dockerId
You need to be root in order to cd into that dir. If you are not root, try 'sudo su' before running the command.
Edit: Following v1.3, see Jiri's answer - it is better.
Another trick is to use the atomic tool to do something like:
mkdir -p /path/to/mnt && atomic mount IMAGE /path/to/mnt
The Docker image will be mounted to /path/to/mnt for you to inspect it.
My preferred way to understand what is going on inside a container is to expose a port (-p 8000) and start a throwaway web server inside it:
docker run -it -p 8000:8000 image
# inside the container:
python -m SimpleHTTPServer
If you are using the AUFS storage driver, you can use my docker-layer script to find any container's filesystem root (mnt) and read-write layer:
# docker-layer musing_wiles
rw layer : /var/lib/docker/aufs/diff/c83338693ff190945b2374dea210974b7213bc0916163cc30e16f6ccf1e4b03f
mnt : /var/lib/docker/aufs/mnt/c83338693ff190945b2374dea210974b7213bc0916163cc30e16f6ccf1e4b03f
Edit 2018-03-28 :
docker-layer has been replaced by docker-backup
The docker exec command, which runs a command in a running container, can help in multiple cases.
Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]

Run a command in a running container

Options:
  -d, --detach               Detached mode: run command in the background
      --detach-keys string   Override the key sequence for detaching a container
  -e, --env list             Set environment variables
  -i, --interactive          Keep STDIN open even if not attached
      --privileged           Give extended privileges to the command
  -t, --tty                  Allocate a pseudo-TTY
  -u, --user string          Username or UID (format: <name|uid>[:<group|gid>])
  -w, --workdir string       Working directory inside the container
For example:
1) Accessing the running container's filesystem with bash:
docker exec -it containerId bash
2) Accessing the running container's filesystem as root, to have the required rights:
docker exec -it -u root containerId bash
This is particularly useful for doing some processing as root in a container.
3) Accessing the running container's filesystem with a specific working directory:
docker exec -it -w /var/lib containerId bash
Often times I only need to explore the docker filesystem because my build won't run, so docker run -it <container_name> bash is impractical. I also do not want to waste time and memory copying filesystems, so docker cp <container_name>:<path> <target_path> is impractical too.
While possibly unorthodox, I recommend re-building with ls as the final command in the Dockerfile:
CMD [ "ls", "-R" ]
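Then rebuild and run; the container prints the recursive listing and exits (a sketch; debug-image is a hypothetical tag, and you would revert the CMD afterwards):
docker build -t debug-image .
docker run --rm debug-image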
I've found the easiest all-in-one solution to view, edit, and copy files with a GUI app inside almost any running container.
(screenshot: mc editing files in docker)
Inside the container, install mc and ssh: docker exec -it <container> /bin/bash, then at the prompt install the mc and ssh packages.
In the same exec-bash console, run mc.
Press ESC, then 9, then ENTER to open the menu and select "Shell link...".
Using "Shell link...", open SCP-based filesystem access to any host with an ssh server running (including the one running Docker), by its IP address.
Do your job in the graphical UI.
This method overcomes all issues with permissions, snap isolation etc., allows copying directly to any machine, and is the most pleasant for me to use.
I had an unknown container that was doing some production workload, and I did not want to run any command in it.
So, I used docker diff.
This lists all files that the container has changed, and is therefore well suited to exploring the container file system.
To get only a folder you can just use grep:
docker diff <container> | grep /var/log
It will not show files from the docker image itself. Depending on your use case, this can help or not.
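Each output line is prefixed with A (added), C (changed) or D (deleted); e.g. (hypothetical paths):
docker diff <container>
C /var/log
A /var/log/app.log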
Late to the party, but in 2022 we have VS Code

How to access to a docker container via SSH using IP address?

I'm using NVIDIA Docker in a Linux machine (Ubuntu 20.04). I've created a container named user1 using nvidia/cuda:11.0-base image as follows:
docker run --gpus all --name user1 -dit nvidia/cuda:11.0-base /bin/bash
And, here is what I see if I run docker ps -a:
admin#my_desktop:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a365362840de nvidia/cuda:11.0-base "/bin/bash" 3 seconds ago Up 2 seconds user1
I want to access to that container via ssh using its unique IP address from a totally different machine (other than my_desktop, which is the host). First of all, is it possible to grant each container a unique IP address? If so, how can I do it? Thanks in advance.
In case you want to access your container with ssh from an external VM, you need to do the following:
Install the ssh daemon for your container
Run the container and expose its ssh port
I would propose the following Dockerfile, which builds from nvidia/cuda:11.0-base and creates an image with the ssh daemon inside
Dockerfile
# Instruction for Dockerfile to create a new image on top of the base image (nvidia/cuda:11.0-base)
FROM nvidia/cuda:11.0-base
ARG root_password
RUN apt-get update || echo "OK" && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo "root:${root_password}" | chpasswd
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Build the image from the Dockerfile
docker image build --build-arg root_password=password --tag nvidia/cuda:11.0-base-ssh .
Create the container
docker container run -d -P --name ssh nvidia/cuda:11.0-base-ssh
Run docker ps to see the container port
Finally, access the container
ssh -p 49157 root@<VM_IP>
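Instead of scanning the docker ps output, docker port prints the host mapping for a given container port directly (a sketch; "ssh" is the container name created above):
docker port ssh 22
# e.g. 0.0.0.0:49157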
EDIT: As David Maze correctly pointed out, you should be aware that the root password will be visible in the image history. Also, this approach overwrites the original container process.
If this process is to be adopted, it needs to be modified in case you need it for production use; it serves as a starting point for someone who wishes to add ssh to their container.

docker inside docker container

I want to install docker inside a running docker container.
docker run -it centos:centos7
My base container is using CentOS; I can log in to the running container using docker exec, and when I try to install Docker inside it using yum install -y docker, it installs.
But somehow I can't start the Docker daemon with docker -d &; it gives me this error:
INFO[0000] Option DefaultNetwork: bridge
WARN[0000] Running modprobe bridge nf_nat br_netfilter failed with message: , error: exit status 1
FATA[0000] Error starting daemon: Error initializing network controller: Error initializing bridge driver: Setup IP forwarding failed: open /proc/sys/net/ipv4/ip_forward: read-only file system
Is there a way I can install docker inside docker container or build image already having running docker? I have already seen these examples but none works for me.
The output of uname -r on the host machine:
[fedora@ ~]$ uname -r
4.2.6-200.fc22.x86_64
Any help would be appreciated.
Thanks in advance
Update
Thanks to https://stackoverflow.com/a/38016704/372019 I want to show another approach.
Instead of mounting the host's docker binary, you should copy or install a container specific release of the docker binary. Since you're only using it in a client mode, you won't need to install it as a system service. You still need to mount the Docker socket into the container so that you can easily communicate with the host's Docker engine.
Assuming that you got a base image with a working Docker binary (e.g. the official docker image), the example now looks like this:
docker run\
-v /var/run/docker.sock:/var/run/docker.sock\
docker:1.12 docker info
Without actually answering your question, I'd suggest you read Using Docker-in-Docker for your CI or testing environment? Think twice.
It explains why running Docker-in-Docker should be replaced with a setup where Docker containers run as siblings of the "outer" or "base" container. The article also links to the original https://github.com/jpetazzo/dind project, where you can find working examples of how to run Docker in Docker, in case you still want Docker-in-Docker.
An example of how to enable a container to access the host's Docker daemon looks like this:
docker run\
-v /var/run/docker.sock:/var/run/docker.sock\
-v /usr/bin/docker:/usr/bin/docker\
busybox:latest /usr/bin/docker info
If you are on Mac with Docker toolbox.
The below command WON’T WORK
docker run\
-v /var/run/docker.sock:/var/run/docker.sock\
-v /usr/bin/docker:/usr/bin/docker\
busybox:latest /usr/bin/docker info
Because /var/run/docker.sock will not be on your OS X filesystem:
the Docker daemon is running inside the boot2docker VM, and that's where the unix socket is.
So you have to run the container from the boot2docker VM:
$ docker-machine ssh default
$ docker run\
-v /var/run/docker.sock:/var/run/docker.sock\
-v $(which docker):/usr/bin/docker\
busybox:latest /usr/bin/docker info
$ exit
This looks like Docker-in-Docker, feels like Docker-in-Docker, but it's not Docker-in-Docker: when this container creates more containers, they will be created in the top-level Docker.
You need the --privileged parameter.
By default, Docker containers are “unprivileged” and cannot, for
example, run a Docker daemon inside a Docker container.
Source
Run your base image with the command docker run --privileged -it centos:centos7 bash. Then you may install and run another docker container inside that container.
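Alternatively, the official docker:dind image ships with a daemon prepared for this setup; it still requires --privileged (a sketch, not part of the original answer):
docker run --privileged --name dind -d docker:dind
docker exec -it dind docker info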
I had similar problems in my VMs.
I solved the problem by changing the storage driver to vfs (in the daemon.json file), i.e.:
{ "storage-driver": "vfs" }
To make this work as an image, first create a base image, in my case with CentOS 7:
FROM centos:7
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]
With this image built (in my case I called it local/c7-systemd), create a second image, installing Docker and copying daemon.json into it:
FROM local/c7-systemd
RUN yum install -y yum-utils
RUN yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
RUN yum install -y docker-ce docker-ce-cli containerd.io
RUN curl -L "https://github.com/docker/compose/releases/download/1.28.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
RUN ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
COPY daemon.json /etc/docker/daemon.json
RUN yum install -y nano
RUN systemctl enable docker
EXPOSE 80
EXPOSE 8080
EXPOSE 8161
EXPOSE 6379
EXPOSE 8761
CMD ["/usr/sbin/init"]
enjoy!

`docker cp` doesn't copy file into container

I have a dockerized project. I build, copy a file from the host system into the docker container, and then shell into the container to find that the file isn't there. How is docker cp supposed to work?
$ docker build -q -t foo .
Sending build context to Docker daemon 64 kB
Step 0 : FROM ubuntu:14.04
---> 2d24f826cb16
Step 1 : MAINTAINER Brandon Istenes <redacted#email.com>
---> Using cache
---> f53a163ef8ce
Step 2 : RUN apt-get update
---> Using cache
---> 32b06b4131d4
Successfully built 32b06b4131d4
$ docker cp ~/.ssh/known_hosts foo:/root/.ssh/known_hosts
$ docker run -it foo bash
WARNING: Your kernel does not support memory swappiness capabilities, memory swappiness discarded.
root@421fc2866b14:/# ls /root/.ssh
root@421fc2866b14:/#
So there was some mix-up with the names of images and containers. Obviously, the cp operation was acting on a different container than the one I brought up with the run command. In any case, the correct procedure is:
# Build the image, call it foo-build
docker build -q -t foo-build .
# Create a container from the image called foo-tmp
docker create --name foo-tmp foo-build
# Run the copy command on the container
docker cp /src/path foo-tmp:/dest/path
# Commit the container as a new image
docker commit foo-tmp foo
# The new image will have the files
docker run foo ls /dest
You need to docker exec to get into your container; your command creates a new container instead.
I have this alias to get into the last created container with that container's own shell:
alias exec_last='docker exec -it $(docker ps -lq) $(docker inspect -f "{{.Path}}" $(docker ps -lq))'
What docker version are you using? As of Docker 1.8, cp supports copying from host to container:
• Copy files from host to container: docker cp used to only copy files from a container out to the host, but it now works the other way round: docker cp foo.txt mycontainer:/foo.txt
Please note the difference between images and containers. If you want every container created from that Dockerfile to contain the file (even if you don't copy it afterward), you can use COPY or ADD in the Dockerfile. If you want to copy the file after the container is created from the image, you can use the docker cp command (available since version 1.8).
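For example, a minimal Dockerfile that bakes the question's known_hosts file into every container created from the image (a sketch; the file must sit in the build context next to the Dockerfile):
FROM ubuntu:14.04
RUN mkdir -p /root/.ssh
COPY known_hosts /root/.ssh/known_hosts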

How to copy files from host to Docker container?

I am trying to build a backup and restore solution for the Docker containers that we work with.
I have a Docker base image that I created, ubuntu:base, and do not want to have to rebuild it each time with a Dockerfile to add files to it.
I want to create a script that runs from the host machine and creates a new container using the ubuntu:base Docker image and then copies files into that container.
How can I copy files from the host to the container?
The cp command can be used to copy files.
One specific file can be copied TO the container like:
docker cp foo.txt container_id:/foo.txt
One specific file can be copied FROM the container like:
docker cp container_id:/foo.txt foo.txt
For emphasis, container_id is a container ID, not an image ID. (Use docker ps to view the listing, which includes container_ids.)
Multiple files contained by the folder src can be copied into the target folder using:
docker cp src/. container_id:/target
docker cp container_id:/src/. target
Reference: Docker CLI docs for cp
In Docker versions prior to 1.8 it was only possible to copy files from a container to the host. Not from the host to a container.
Get container name or short container id:
$ docker ps
Get full container id:
$ docker inspect -f '{{.Id}}' SHORT_CONTAINER_ID-or-CONTAINER_NAME
Copy file:
$ sudo cp path-file-host /var/lib/docker/aufs/mnt/FULL_CONTAINER_ID/PATH-NEW-FILE
EXAMPLE:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d8e703d7e303 solidleon/ssh:latest /usr/sbin/sshd -D cranky_pare
$ docker inspect -f '{{.Id}}' cranky_pare
or
$ docker inspect -f '{{.Id}}' d8e703d7e303
d8e703d7e3039a6df6d01bd7fb58d1882e592a85059eb16c4b83cf91847f88e5
$ sudo cp file.txt /var/lib/docker/aufs/mnt/d8e703d7e3039a6df6d01bd7fb58d1882e592a85059eb16c4b83cf91847f88e5/root/file.txt
The cleanest way is to mount a host directory on the container when starting the container:
{host} docker run -v /path/to/hostdir:/mnt --name my_container my_image
{host} docker exec -it my_container bash
{container} cp /mnt/sourcefile /path/to/destfile
Typically there are three cases:
From a container to the host
docker cp container_id:./bar/foo.txt .
The docker cp command works both ways, too.
From the host to a container
docker exec -i container_id sh -c 'cat > ./bar/foo.txt' < ./foo.txt
A second approach to copy from host to container:
docker cp foo.txt mycontainer:/foo.txt
From a container to a container mixes 1 and 2
docker cp container_id1:./bar/foo.txt .
docker exec -i container_id2 sh -c 'cat > ./bar/foo.txt' < ./foo.txt
The following is a fairly ugly way of doing it but it works.
docker run -i ubuntu /bin/bash -c 'cat > file' < file
If you need to do this on a running container you can use docker exec (added in 1.3).
First, find the container's name or ID:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b9b7400ddd8f ubuntu:latest "/bin/bash" 2 seconds ago Up 2 seconds elated_hodgkin
In the example above we can either use b9b7400ddd8f or elated_hodgkin.
If you wanted to copy everything in /tmp/somefiles on the host to /var/www in the container:
$ cd /tmp/somefiles
$ tar -cv * | docker exec -i elated_hodgkin tar x -C /var/www
We can then exec /bin/bash in the container and verify it worked:
$ docker exec -it elated_hodgkin /bin/bash
root@b9b7400ddd8f:/# ls /var/www
file1 file2
Create a new dockerfile and use the existing image as your base.
FROM myName/myImage:latest
ADD myFile.py bin/myFile.py
Then build the container:
docker build .
The solution is given below.
From the Docker shell:
root@123abc:/root#  <-- get the container ID
From the host:
cp thefile.txt /var/lib/docker/devicemapper/mnt/123abc<bunch-o-hex>/rootfs/root
The file is copied directly to the location where the container sits on the filesystem.
Another solution for copying files into a running container is using tar:
tar -c foo.sh | docker exec -i theDockerContainer /bin/tar -C /tmp -x
Copies the file foo.sh into /tmp of the container.
Edit: Removed the redundant -f, thanks to Maarten's comment.
To copy a file from the host to a running container:
docker exec -i $CONTAINER /bin/bash -c "cat > $CONTAINER_PATH" < $HOST_PATH
Based on Erik's answer and Mikl's and z0r's comments.
This is a direct answer to the question 'Copying files from host to Docker container' raised in the title.
Try docker cp. It is the easiest way to do that and works even on my Mac. Usage:
docker cp /root/some-file.txt some-docker-container:/root
This will copy the file some-file.txt from the directory /root on your host machine into the directory /root of the Docker container named some-docker-container. It is very close to the secure copy syntax. And as shown in the previous post, you can use it vice versa, i.e., you can also copy files from the container to the host.
And before you downvote this post, please enter docker cp --help. Reading the documentation can be very helpful, sometimes...
If you don't like that way and you want data volumes in your already created and running container, then recreation is your only option today. See also How can I add a volume to an existing Docker container?.
I tried most of the (upvoted) solutions here, but in docker 17.09 (in 2018) there is no longer a /var/lib/docker/aufs folder.
This simple docker cp solved the task:
docker cp c:\path\to\local\file container_name:/path/to/target/dir/
How to get container_name?
docker ps
There is a NAMES column. Don't use the IMAGE column.
With Docker 1.8, docker cp is able to copy files from host to container. See the Docker blog post Announcing Docker 1.8: Content Trust, Toolbox, and Updates to Registry and Orchestration.
To copy files/folders between a container and the local filesystem, type the command:
docker cp {SOURCE_FILE} {DESTINATION_CONTAINER_ID}:/{DESTINATION_PATH}
For example,
docker cp /home/foo container-id:/home/dir
To get the container id, type the given command:
docker ps
The above content is taken from docker.com.
Assuming the container is already running, type the given command:
# cat /path/to/host/file/ | docker exec -i -t <container_id> bash -c "/bin/cat > /path/to/container/file"
To share files using shared directory, run the container by typing the given command:
# docker run -v /path/to/host/dir:/path/to/container/dir ...
Note: Problems with permissions might arise, as the container's users are not necessarily the same as the host's users.
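If the copied or shared files end up with the wrong owner inside the container, one common fix is to chown them from inside (a sketch; the UID:GID pair is an assumption for your app user):
docker exec <container_id> chown -R 1000:1000 /path/to/container/dir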
This is the command to copy data from Docker to the host:
docker cp container_id:/path/filename /hostpath
docker cp a13fb9c9e674:/tmp/dgController.log /tmp/
Below is the command to copy data from the host to docker:
docker cp a.txt ccfbeb35116b:/home/
Container up syntax:
docker run -v /HOST/folder:/Container/folder
In a Dockerfile:
COPY hom* /myFolder/ # adds all files starting with "hom"
COPY hom?.txt /myFolder/ # ? is replaced with any single character, e.g., "home.txt"
In a docker environment, all containers are found in the directory:
/var/lib/docker/aufs/required-docker-id/
To copy the source directory/file to any part of the container, type the given command:
sudo cp -r mydir/ /var/lib/docker/aufs/mnt/required-docker-id/mnt/
The docker cp command is a handy utility that allows copying files and folders between a container and the host system.
If you want to copy files from your host system to the container, you should use docker cp like this:
docker cp host_source_path container:destination_path
List your running containers first using the docker ps command:
abhishek@linuxhandbook:~$ sudo docker ps
CONTAINER ID   IMAGE          COMMAND   CREATED         STATUS         PORTS   NAMES
8353c6f43fba   775349758637   "bash"    8 seconds ago   Up 7 seconds           ubu_container
You need to know either the container ID or the container name. In my case, the container name is ubu_container and the container ID is 8353c6f43fba.
If you want to verify that the files have been copied successfully, you can enter your container in the following manner and then use regular Linux commands:
docker exec -it ubu_container bash
Copy files from host system to docker container
Copying with docker cp is similar to the copy command in Linux.
I am going to copy a file named a.py to the home/dir1 directory in the container.
docker cp a.py ubu_container:/home/dir1
If the file is successfully copied, you won’t see any output on the screen. If the destination path doesn’t exist, you would see an error:
abhishek@linuxhandbook:~$ sudo docker cp a.txt ubu_container:/home/dir2/subsub
Error: No such container:path: ubu_container:/home/dir2
If the destination file already exists, it will be overwritten without any warning.
You may also use container ID instead of the container name:
docker cp a.py 8353c6f43fba:/home/dir1
If the host is CentOS or Fedora, the container filesystem is NOT under /var/lib/docker/aufs; instead it can be reached under /proc:
cp -r /home/user/mydata/* /proc/$(docker inspect --format "{{.State.Pid}}" <containerid>)/root
This command copies all contents of the data directory to / of the container with id "containerid".
docker cp [OPTIONS] SRC_PATH CONTAINER:DEST_PATH
The destination path must already exist.
tar and docker cp are a good combo for copying everything in a directory.
Create a data volume container
docker create --name dvc --volume /path/on/container cirros
To preserve the directory hierarchy
tar -c -C /path/on/local/machine . | docker cp - dvc:/path/on/container
Check your work
docker run --rm --volumes-from dvc cirros ls -al /path/on/container
Many who find this question may actually have the problem of copying files into a Docker image while it is being created (I did).
In that case, you can use the COPY command in the Dockerfile that you use to create the image.
See the documentation.
In case it is not clear to someone like me what mycontainer in @h3nrik's answer means: it is actually the container id. To copy a file WarpSquare.mp4 in /app/example_scenes/1440p60 from an exited docker container to the current folder, I used this:
docker cp `docker ps -q -l`:/app/example_scenes/1440p60/WarpSquare.mp4 .
where docker ps -q -l pulls up the container id of the last exited instance. If the container has not exited, you can get the id with docker container ls or docker ps.
docker cp SRC_PATH CONTAINER_ID:DEST_PATH
For example, I want to copy my file xxxx/download/jenkins.war to Tomcat.
First I get the ID of the Tomcat container:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
63686740b488 tomcat "catalina.sh run" 12 seconds ago Up 11 seconds 0.0.0.0:8080->8080/tcp peaceful_babbage
docker cp xxxx/download/jenkins.war 63686740b488:/usr/local/tomcat/webapps/
This is a one-liner for copying a single file while running a tomcat container.
docker run -v /PATH_TO_WAR/sample.war:/usr/local/tomcat/webapps/myapp.war -it -p 8080:8080 tomcat
This will copy the war file to webapps directory and get your app running in no time.
My favorite method:
CONTAINERS:
CONTAINER_ID=$(docker ps | grep <string> | awk '{ print $1 }' | xargs docker inspect -f '{{.Id}}')
file.txt:
mv -f file.txt /var/lib/docker/devicemapper/mnt/$CONTAINER_ID/rootfs/root/file.txt
or
mv -f file.txt /var/lib/docker/aufs/mnt/$CONTAINER_ID/rootfs/root/file.txt
The best way I found for copying files to a container is mounting a host directory using the -v option of the docker run command.
There are good answers, but some are too specific. I find docker ps is a good way to get the container id you're interested in. Then do
mount | grep <id>
to see where the volume is mounted. That's
/var/lib/docker/devicemapper/mnt/<id>/rootfs/
for me, but it might be a different path depending on the OS and configuration. Now simply copy files to that path.
Using -v is not always practical.
Try docker cp.
Usage:
docker cp CONTAINER:PATH HOSTPATH
It copies files/folders from PATH to the HOSTPATH.
