How do I transfer a Docker image from one machine to another one without using a repository, no matter private or public?
I create my own image in VirtualBox, and when it is finished I try to deploy to other machines to have real usage.
Since it is based on my own base image (like Red Hat Linux), it cannot easily be recreated from a Dockerfile; my Dockerfile isn't easily portable.
Are there simple commands I can use? Or another solution?
You will need to save the Docker image as a tar file:
docker save -o <path for generated tar file> <image name>
Then copy your image to the new system with regular file transfer tools such as cp, scp, or rsync (preferred for big files). After that you will have to load the image into Docker:
docker load -i <path to image tar file>
PS: You may need to sudo all commands.
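For the copy step, a hypothetical transfer might look like this (the user, host, and paths are placeholders):
scp /tmp/myimage.tar user@remotehost:/tmp/
or, for a resumable transfer with progress on large files:
rsync -avP /tmp/myimage.tar user@remotehost:/tmp/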
EDIT:
You should include a filename (not just a directory) with -o, for example:
docker save -o c:/myfile.tar centos:16
Transferring a Docker image via SSH, bzipping the content on the fly:
docker save <image> | bzip2 | ssh user@host docker load
Note that docker load automatically decompresses images for you. It supports gzip, bzip2 and xz.
It's also a good idea to put pv in the middle of the pipe to see how the transfer is going:
docker save <image> | bzip2 | pv | ssh user@host docker load
(More info about pv: home page, man page).
Important note from @Thomas Steinbach: on high bandwidth, bzip won't be able to compress fast enough. If you can upload at 10 MB/s or more, gzip/gunzip will be much faster than bzip2.
If you're on 3G and your Internet is slow, @jgmjgm suggests using xz: it offers a higher compression ratio.
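For example, the same pipeline with xz (assuming xz is installed on the sending machine; docker load decompresses it automatically, as noted above):
docker save <image> | xz | ssh user@host docker load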
To save an image to any file path or shared NFS location, see the following example.
Get the image id by doing:
docker images
Say you have an image named "matrix-data".
Save the image by name:
docker save -o /home/matrix/matrix-data.tar matrix-data
Copy the image from the path to any host. Now import to your local Docker installation using:
docker load -i <path to copied image file>
You can use a one-liner with DOCKER_HOST variable:
docker save app:1.0 | gzip | DOCKER_HOST=ssh://user@remotehost docker load
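On Docker 19.03 and newer, a context can make this remote host persistent (a sketch; user and remotehost are placeholders):
docker context create remotehost --docker "host=ssh://user@remotehost"
docker save app:1.0 | gzip | docker --context remotehost load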
First save the Docker image to a compressed archive:
docker save <docker image name> | gzip > <docker image name>.tar.gz
Then load the exported image into Docker using the command below:
zcat <docker image name>.tar.gz | docker load
Run
docker images
to see a list of the images on the host. Let's say you have an image called awesomesauce. In your terminal, cd to the directory where you want to export the image to. Now run:
docker save awesomesauce:latest > awesomesauce.tar
Copy the tar file to a thumb drive or whatever, and then copy it to the new host computer.
Now from the new host do:
docker load < awesomesauce.tar
Now go have a coffee and read Hacker News...
The fastest way to save and load a Docker image is to pipe it through gzip:
docker save <image_id> | gzip > image_file.tgz
To load your zipped image on another server, use this command directly; it will be recognized as a zipped image:
docker load -i image_file.tgz
To rename or re-tag the image, use:
docker image tag <image_id> <image_path_name>:<version>
for example:
docker image tag 4444444 your_docker_or_harbor_path/ubuntu:14.0
For a flattened export of a container's filesystem, use:
docker export CONTAINER_ID > my_container.tar
Use cat my_container.tar | docker import - to import said image.
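docker import can also assign a repository and tag as it imports (the name below is a placeholder):
cat my_container.tar | docker import - my_container:imported
Note that export/import flattens the image and drops metadata such as ENTRYPOINT, CMD, and environment variables, so this is only suitable when you just need the filesystem.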
docker-push-ssh is a command line utility I created just for this scenario.
It sets up a temporary private Docker registry on the server, establishes an SSH tunnel from your localhost, pushes your image, then cleans up after itself.
The benefit of this approach over docker save (at the time of writing most answers are using this method) is that only the new layers are pushed to the server, resulting in a MUCH quicker upload.
Oftentimes, using an intermediate registry like Docker Hub is undesirable and cumbersome.
https://github.com/brthor/docker-push-ssh
Install:
pip install docker-push-ssh
Example:
docker-push-ssh -i ~/my_ssh_key username@myserver.com my-docker-image
The biggest caveat is that you have to manually add your localhost to Docker's insecure_registries configuration. Run the tool once and it will give you an informative error:
Error Pushing Image: Ensure localhost:5000 is added to your insecure registries.
More details (OS X): Where should I set the '--insecure-registry' flag on Mac OS? https://stackoverflow.com/questions/32808215/where-to-set-the-insecure-registry-flag-on-mac-os
When using docker-machine, you can copy images between machines mach1 and mach2 with:
docker $(docker-machine config <mach1>) save <image> | docker $(docker-machine config <mach2>) load
And of course you can also stick pv in the middle to get a progress indicator:
docker $(docker-machine config <mach1>) save <image> | pv | docker $(docker-machine config <mach2>) load
You may also omit one of the docker-machine config sub-shells to use your current default Docker host:
docker save <image> | docker $(docker-machine config <mach>) load
to copy an image from the current Docker host to mach,
or
docker $(docker-machine config <mach>) save <image> | docker load
to copy from mach to the current Docker host.
I assume you need to save couchdb-cartridge which has an image id of 7ebc8510bc2c:
stratos@Dev-PC:~$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
couchdb-cartridge latest 7ebc8510bc2c 17 hours ago 1.102 GB
192.168.57.30:5042/couchdb-cartridge latest 7ebc8510bc2c 17 hours ago 1.102 GB
ubuntu 14.04 53bf7a53e890 3 days ago 221.3 MB
Save the image to a tar file. I will use the /media/sf_docker_vm/ directory to save the image.
stratos@Dev-PC:~$ docker save imageID > /media/sf_docker_vm/archiveName.tar
Copy the archiveName.tar file to your new Docker instance using whatever method works in your environment, for example FTP, SCP, etc.
Run the docker load command on your new Docker instance and specify the location of the image tar file.
stratos@Dev-PC:~$ docker load < /media/sf_docker_vm/archiveName.tar
Finally, run the docker images command to check that the image is now available.
stratos@Dev-PC:~$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
couchdb-cartridge latest 7ebc8510bc2c 17 hours ago 1.102 GB
192.168.57.30:5042/couchdb-cartridge latest 7ebc8510bc2c 17 hours ago 1.102 GB
ubuntu 14.04 4d2eab1c0b9a 3 days ago 221.3 MB
The best way to save all the images is like this:
docker save $(docker images --format '{{.Repository}}:{{.Tag}}') -o allimages.tar
The above command will save all the images in allimages.tar. To load the images, go to the directory where you saved them and run:
docker load -i allimages.tar
Just make sure to run these commands in PowerShell and not in Command Prompt.
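On Linux/macOS shells, note that dangling images are listed as <none>:<none>, which can make docker save fail with an invalid-reference error; a sketch that filters them out:
docker save $(docker images --format '{{.Repository}}:{{.Tag}}' | grep -v '<none>') -o allimages.tar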
REAL WORLD EXAMPLE
#host1
systemctl stop docker
systemctl start docker
docker commit -p 1d09068ef111 ubuntu001_bkp3
#create backup
docker save -o ubuntu001_bkp3.tar ubuntu001_bkp3
#upload ubuntu001_bkp3.tar to my online drive
aws s3 cp ubuntu001_bkp3.tar s3://mybucket001/
#host2
systemctl stop docker
systemctl start docker
cd /dir1
#download ubuntu001_bkp3.tar from my online drive
aws s3 cp s3://mybucket001/ubuntu001_bkp3.tar /dir1
#restore backup
cat ./ubuntu001_bkp3.tar | docker load
docker run --name ubuntu001 -it ubuntu001_bkp3:latest bash
docker ps -a
docker attach ubuntu001
To transfer images from your local Docker installation to a minikube VM:
docker save <image> | (eval $(minikube docker-env) && docker load)
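Newer minikube releases also ship a built-in command for this, which avoids the docker-env dance (assuming a reasonably recent minikube):
minikube image load <image>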
All other answers are very helpful. I just went through the same problem and figured out an easy way with docker-machine scp.
Since Docker Machine v0.3.0, scp was introduced to copy files from one Docker machine to another. This is very convenient if you want to copy a file from your local computer to a remote Docker machine such as AWS EC2 or DigitalOcean, because Docker Machine takes care of the SSH credentials for you.
Save your images using docker save, like:
docker save -o docker-images.tar app-web
Copy the image using docker-machine scp:
docker-machine scp ./docker-images.tar remote-machine:/home/ubuntu
Assume your remote Docker machine is remote-machine and the directory you want the tar file in is /home/ubuntu.
Load the Docker image:
docker-machine ssh remote-machine sudo docker load -i docker-images.tar
If you are working on a Windows machine and uploading to a Linux machine, commands such as
docker save <image> | ssh user@host docker load
will not work if you are using PowerShell, as it seems to add an additional character to the output. If you run the command using cmd (Command Prompt), it will however work. As a side note, you can also install gzip using Chocolatey, and the following will then also work from cmd:
docker save <image> | gzip | ssh user@host docker load
I wanted to move all images along with their tags.
```
OUT=$(docker images --format '{{.Repository}}:{{.Tag}}')
OUTPUT=($OUT)
docker save $(echo "${OUTPUT[*]}") -o /dir/images.tar
```
Explanation:
First, OUT gets all tags, separated by new lines. Second, OUTPUT turns them into an array. Third, $(echo "${OUTPUT[*]}") passes all tags to a single docker save command, so that all images end up in a single tar.
Additionally, this can be zipped using gzip. On target, run:
tar xvf images.tar.gz -O | docker load
The -O option to tar writes the contents to stdout, which can then be piped into docker load.
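Alternatively, since docker load decompresses gzip by itself (as noted in an answer above), loading the compressed archive directly should also work:
docker load -i images.tar.gz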
Based on @kolypto's answer, this worked great for me, but only with sudo for docker load:
docker save <image> | bzip2 | pv | ssh user@host sudo docker load
or, if you don't have / don't want to install pv:
docker save <image> | bzip2 | ssh user@host sudo docker load
There is no need to zip anything manually.
You may use sshfs:
$ sshfs user@ip:/<remote-path> <local-mount-path>
$ docker save <image-id> > <local-mount-path>/myImage.tar
1. Pull an image or a repository from a registry.
docker pull [OPTIONS] NAME[:TAG|@DIGEST]
2. Save it as a .tar file.
docker save [OPTIONS] IMAGE [IMAGE...]
For example:
docker pull hello-world
docker save -o hello-world.tar hello-world
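On the destination machine, the image can then be loaded back, as in the answers above:
docker load -i hello-world.tar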
Scripts to perform the Docker save and load functions (tried and tested):
Docker Save:
#!/bin/bash
#files will be saved in the dir 'Docker_images'
mkdir Docker_images
cd Docker_images
directory=`pwd`
c=0
#save the image names in 'list.txt'
docker images | awk 'NR>1 {print $1}' > list.txt   # NR>1 skips the header row
printf "START \n"
input="$directory/list.txt"
#Check and create the image tar for the docker images
while IFS= read -r line
do
one=`echo $line | awk '{print $1}'`
two=`echo $line | awk '{print $1}' | cut -c 1-3`
if [ "$one" != "<none>" ]; then
c=$((c+1))
printf "\n $one \n $two \n"
docker save -o $two$c'.tar' $one
printf "Docker image number $c successfully converted: $two$c \n \n"
fi
done < "$input"
Docker Load:
#!/bin/bash
cd Docker_images/
directory=`pwd`
ls | grep tar > files.txt
c=0
printf "START \n"
input="$directory/files.txt"
while IFS= read -r line
do
c=$((c+1))
printf "$c) $line \n"
docker load -i $line
printf "$c) Successfully created the Docker image $line \n \n"
done < "$input"
Related
I did a docker pull and can list the image that's downloaded. I want to see the contents of this image. Did a search on the net but no straight answer.
If the image contains a shell, you can run an interactive shell container using that image and explore whatever content that image has. If sh is not available, the busybox ash shell might be.
For instance:
docker run -it image_name sh
Or the following, for images with an entrypoint:
docker run -it --entrypoint sh image_name
Or if you want to see how the image was built, meaning the steps in its Dockerfile, you can:
docker image history --no-trunc image_name > image_history
The steps will be logged into the image_history file.
You should not start a container just to see the image contents. For instance, you might want to look for malicious content, not run it. Use "create" instead of "run":
docker create --name="tmp_$$" image:tag
docker export tmp_$$ | tar t
docker rm tmp_$$
The accepted answer here is problematic, because there is no guarantee that an image will have any sort of interactive shell. For example, the drone/drone image contains only a single command, /drone, and it has an ENTRYPOINT as well, so this will fail:
$ docker run -it drone/drone sh
FATA[0000] DRONE_HOST is not properly configured
And this will fail:
$ docker run --rm -it --entrypoint sh drone/drone
docker: Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"sh\": executable file not found in $PATH".
This is not an uncommon configuration; many minimal images contain only the binaries necessary to support the target service. Fortunately, there are mechanisms for exploring an image filesystem that do not depend on the contents of the image. The easiest is probably the docker export command, which will export a container filesystem as a tar archive. So, start a container (it does not matter if it fails or not):
$ docker run -it drone/drone sh
FATA[0000] DRONE_HOST is not properly configured
Then use docker export to export the filesystem to tar:
$ docker export $(docker ps -lq) | tar tf -
The docker ps -lq there means "give me the id of the most recent docker container". You could replace that with an explicit container name or id.
docker save nginx > nginx.tar
tar -xvf nginx.tar
The following files are present:
manifest.json – describes the filesystem layers and the name of the JSON file that has the container properties.
<imageid>.json – container properties.
<layerid>/ directories – each contains a JSON file describing the layer's properties and the filesystem associated with that layer. Docker stores container images as layers to optimize storage space by reusing layers across images.
https://sreeninet.wordpress.com/2016/06/11/looking-inside-container-images/
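For example, to pretty-print the manifest without unpacking the whole archive (assuming GNU tar and Python 3 are available):
tar -xOf nginx.tar manifest.json | python3 -m json.tool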
OR
you can use dive to view the image contents interactively with a TUI:
https://github.com/wagoodman/dive
EXPLORING DOCKER IMAGE!
Figure out what kind of shell is in there, bash or sh...
Inspect the image first: docker inspect name-of-container-or-image
Look for entrypoint or cmd in the returned JSON.
Then do: docker run --rm -it --entrypoint=/bin/bash name-of-image
Once inside, do ls -lsa or any other shell command, like cd ..
The -it stands for interactive... and TTY. The --rm removes the container after the run.
If there are no common tools like ls or bash present and you have access to the Dockerfile, simply add the tool as a layer.
Example (Alpine Linux):
RUN apk add --no-cache bash
And when you don't have access to the Dockerfile then just copy/extract the files from a newly created container and look through them:
docker create <image>   # returns a container ID; the container is never started
docker cp <container ID>:<source_path> <destination_path>
docker rm <container ID>
cd <destination_path> && ls -lsah
To list the detailed contents of an image, you can run docker run --rm image/name ls -alR, where --rm removes the container as soon as it exits.
If you want to list the files in an image without starting a container:
docker create --name listfiles <image name>
docker export listfiles | tar -t
docker rm listfiles
We can try a simpler one as follows:
docker image inspect image_id
This worked in Docker version:
"DockerVersion": "18.05.0-ce"
If you want to check the image contents without running it, you can do this:
$ sudo bash
...
$ cd /var/lib/docker # default path in most installations
$ find . -iname a_file_inside_the_image.ext
... (will find the base path here)
This works fine with the current default BTRFS storage driver.
One-liner, no docker run (based on the responses above):
IMAGE=your_image; docker create --name filelist $IMAGE command && docker export filelist | tar tf - | tree --fromfile . && docker rm filelist
Same, but writing the tree structure to result.txt:
IMAGE=your_image; docker create --name filelist $IMAGE command && docker export filelist | tar tf - | tree --noreport --fromfile . | tee result.txt && docker rm filelist
I tried this tool: https://github.com/wagoodman/dive
I found it quite helpful for exploring the contents of a Docker image.
Perhaps this is not a very straightforward approach, but it worked for me.
I had an ECR repo (Amazon Elastic Container Registry) whose code I wanted to see.
First, we need to save the image you want to inspect as a tar file. In my case the command went like: docker save .dkr.ecr.us-east-1.amazonaws.com/<name_of_repo>:image-tag > saved-repo.tar
Untar the file using the command tar -xvf saved-repo.tar. You will see many folders and files.
Now try to find the file which contains the code you are looking for (if you know some part of the code).
Command for searching the file: grep -iRl "string you want to search" ./
This will lead you to the file. That file may itself be tarred; if so, untar it using the command mentioned in step 2.
If you don't know the code you are searching for, you will need to go through all the files from step 2, and this can be a bit tiring.
All the best!
There is a free open-source tool called Anchore CLI that you can use to scan container images. This command will allow you to list all files in a container image:
anchore-cli image content myrepo/app:latest files
https://anchore.com/opensource/
EDIT: it is not available from anchore.com anymore; it's a Python program you can install from https://github.com/anchore/anchore-cli
With Docker EE for Windows (17.06.2-ee-6 on Hyper-V Server 2016), all contents of Windows containers can be examined under the C:\ProgramData\docker\windowsfilter\ path of the host OS.
No special mounting needed.
The folder prefix can be found by container ID in the docker ps -a output.
I'm running a docker-machine to create a local docker host on OS X 10.10 Yosemite.
The following script creates the host, then logs in remotely to run a second script which loads images that have previously been saved to local compressed files.
#!/bin/bash
docker-machine create --driver virtualbox docker-stack
docker-machine docker-machine docker-stack-local
stackip="$(docker-machine ip docker-stack-local)"
sshpass -p "tcuser" scp -v -r docker_images/ docker@"${stackip}":/home/docker/
cat ssh_after.sh | docker-machine ssh ec2-01 sh
The ssh_after.sh script loads the desired Docker images:
#!/bin/bash
cd /home/docker/test
gunzip -c anaconda_bak.tgz | docker load
gunzip -c r-base_bak.tgz | docker load
gunzip -c git_bak.tgz | docker load
gunzip -c mariadb.tgz | docker load
Unfortunately, the virtual machine volume created is too small to transfer over the files (> 1 GB).
It is a known issue:
http://docs.docker.com/articles/b2d_volume_resize/
However, the solution involves calling GUI tools and is not suitable for scripting purposes.
I would like to find a way to either:
resize the image afterward using a bash script, or
find a way to pipe the stdout stream of the compressed files through ssh to pass to docker load
Note, a modified script works fine for other docker-machine drivers, including AWS EC2 instances.
The solution must involve installing local docker images, not pulling from a repository of any kind.
To avoid confusion, this is about getting image files into the docker host, not mounting files into a docker container instance.
You should be able to use docker-machine create --virtualbox-disk-size "35000" (the value is in MB). Add the disk size option to your build script.
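For example, adapting the create line from the question (35000 MB gives a roughly 35 GB disk):
docker-machine create --driver virtualbox --virtualbox-disk-size "35000" docker-stack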
I have an image that is updated with the following command before each deployment.
$ docker pull myusername/myproject:latest
This command overwrites the previous image.
How can I back up this image (or change it to a different tag locally, without pushing to a networked repository)? If anything goes wrong, I can restore the backup.
How can I back up this image?
Simply use the docker save command:
docker save myusername/myproject:latest | gzip -c > myproject_img_bak20141103.tgz
You will later be able to restore it with the docker load command:
gunzip -c myproject_img_bak20141103.tgz | docker load
or change it to a different tag locally without pushing to a networked repository?
Use the docker tag command:
docker tag myusername/myproject:latest myusername/myproject:bak20141103
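If a later pull turns out to be broken, you can restore by tagging the backup back to latest (a sketch using the names above):
docker tag myusername/myproject:bak20141103 myusername/myproject:latest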
For completeness: For Docker on Windows the following syntax applies:
docker save -o container-file-name.tar mcr.microsoft.com/windows/nanoserver:1809
docker load -i "c:\path\to\file.tar"
Example:
# environment: bash shell
# container id: 1d09068efad1
# backup name: backup01
docker commit -p 1d09068efad1 backup01
docker save -o backup01.tar backup01
https://www.baculasystems.com/blog/docker-backup-containers/
https://www.geeksforgeeks.org/backing-up-a-docker-container/