Moving Docker containers to a new server

I have upgraded to a new server and am trying to migrate my Docker containers over.
Most of the containers that I am running are built from multiple images.
I used the docker commit appID appname command to create my own image of each container,
and then saved all of the images to a .tar file using
docker save image1 image2 image3 > backup.tar
I then transferred the tar file to my new server and ran
docker load -i backup.tar
which added the backup images, as well as their associated volumes, to my new server.
The problem I now have is that there are 7 image files and I cannot find a way to create the Docker containers using them.
When I edit the YAML file and point the image field at the locally stored image rather than the image from the Docker repository, it still pulls the image from the repository.
Is there a recommended way to launch the containers from the local images loaded from the tar file?

Maybe you can use something like this:
docker save -o backup.tar $(docker images --format "{{.Repository}}:{{.Tag}}")
Alternatively:
docker save $(docker images --format "{{.Repository}}:{{.Tag}}") > backup.tar
This saves your images together with their repository names and tags.
Once you do
docker load -i backup.tar
and perform:
docker images -a
you will be able to use the images by their name:tag.

Related

How to load updated docker image onto other machine

I have 2 hosts running the same customized Docker image. I have modified the image on host 1 and saved it to custom.tar. If I take that tar and load it onto host 2, will it just update the image, or should I remove the old Docker image first?
There are two ways to do that: with a repository, or without one using save and load.
With a repository, below are the steps:
Log in on Docker Hub.
Click on Create Repository.
Choose a name and a description for your repository and click Create.
Log in to Docker Hub from the command line (note: newer Docker versions no longer accept the --email flag):
docker login --username=yourhubusername --email=youremail@company.com
Tag your image:
docker tag <existing-image> <hub-user>/<repo-name>[:<tag>]
Push your image to the repository you created
docker push <hub-user>/<repo-name>:<tag>
Pull the image to host 2
docker pull <hub-user>/<repo-name>:<tag>
This adds the image to Docker Hub and makes it available on the internet, so you can now pull it onto any system.
With this approach you can keep the same images under different tags on a system. But if you don't need the old images, it is better to delete them to avoid junk.
Without Docker Hub:
This command will create a tar bundle:
docker save [OPTIONS] IMAGE [IMAGE...]
Example: docker save busybox > busybox.tar
This loads an image from a tar archive or STDIN:
docker load [OPTIONS]
Example: docker load < busybox.tar.gz
Recommended: the Docker Hub or DTR approach is easier to manage, unless you have bandwidth issues because your files are large.
Refer:
Docker Hub Repositories

How to make docker reset the image on remote server?

When Jenkins runs a Docker build, the first build creates the Docker image on the remote server, tagged properly as latest.
On a rebuild it is supposed to overwrite that image on the server, but it doesn't.
Instead it creates a new image with <none> as the repository and <none> as the tag, and stops using the predefined name for the image.
It creates a new image because, as far as Docker is concerned, the two builds produce totally different images.
Is there any way to avoid deleting the image directly from the remote server, and instead update the image under the same tag (name)?
Any ideas for a workaround? How do I stop Jenkins builds from piling up new Docker images when the tag name is static and unchanged?
This eats a lot of disk space when the build runs on cron.
How can I overwrite the Docker image, or at least make it look as if it were overwritten?
You can delete the untagged (dangling) images left over after a Jenkins build with: docker rmi $(docker images -f "dangling=true" -q)

Docker save/load loses the original image repository/name/tag

I'm using Docker 1.12.6.
I have pulled an image from the Docker registry.
I have exported the images as tar files using the docker save command.
I removed the original image and container and loaded the exported image using docker load -i myImage.tar.
Now, when running docker images I notice my image has lost its repository/tag information:
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 5fae4d4b9e02 8 weeks ago 581.3 MB
Why does it have this behavior and how do I keep the original image name?
Use
docker save -o filename.tar <repo>:<tag>
The command docker save <image id> removes the repository and tag names.
To solve this, use docker save <repo>:<tag>, which keeps the repository and tag name in the saved file. For example:
docker save -o ubuntu-18.04.tar ubuntu:18.04
I had the same problem, so I used the following command to fix it manually:
docker tag <Image-ID> <desired Name:Tag>
Reference
[NOTE]:
It's inconsistent: docker save image-repo-name -> docker load
restores name, docker save SHA -> docker load no names or tags,
docker save name:latest -> docker load no names or tags.
AND:
The current (and correct) behavior is as follows:
docker save repo
Saves all tagged images + parents in the repo, and creates a
repositories file listing the tags
docker save repo:tag
Saves tagged image + parents in repo, and creates a repositories file
listing the tag
docker save imageid
Saves image + parents, does not create repositories file. The save
relates to the image only, and tags are left out by design and left as
an exercise for the user to populate based on their own naming
convention.
Reference
A single image ID can have multiple names/tags, so losing the names and tags is what I would expect to happen after saving and loading the image to/from a tarball.
More details are in the discussion about it here.
From the Docker documentation:
cat exampleimage.tgz | docker import - exampleimagelocal:new
root@mymachine:/tmp# cat myimage.tar | docker import --message "New image imported from tarball" - reponame:my-image-name
sha256:be0794427222dcb81182a59c4664b350ecb5ffb7b37928d52d72b31
root@mymachine:/tmp# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
reponame my-image-name be0794427222 6 seconds ago 4.31GB
This one worked for me.
This is a workaround:
Go to the source Docker host and create a text file containing all the image details using the following command: docker image ls > images.txt
The above command will produce a text file similar to the following:
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
<none>       <none>   293e4ed402ba   2 weeks ago   315MB
<none>       <none>   d8e4b0afd6ba   2 weeks ago   551MB
Make the necessary edits to set the tags using the docker image tag command:
docker image tag 293e4ed402ba postgres:latest
docker image tag d8e4b0afd6ba wordpress:latest
I wrote a one-liner that imports a bunch of .tar files and immediately tags each image (with the file name, minus the .tar extension):
for image in *.tar; do docker tag "$(docker import "$image")" "${image%.tar}"; done
Note that you should run it inside the folder where all the tar files are located.

Backup of Docker image

Is it possible to keep a backup of Docker images?
Here is my docker image:
my@onl-dev:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
centdemo latest e0b0d89f6c45 2 days ago 3.322 GB
I have some data in this docker image which I copy from container to host whenever needed.
root@onl-dev:/distros/centos6.6.x86_64# docker cp 9512b894d107:/distros/centos6.6.x86_64 /data/sept15/mess
So I want to keep a backup of this Docker image such that I can restore it whenever I want to copy the data from the container to the host.
You can simply tag the image with a different tag:
docker tag <image-name>:latest <image-name>:backup
Alternatively, you can always use the image ID: e0b0d89f6c45
Another option is to store it in a tar.
docker save mynewimage > /tmp/mynewimage.tar

How to move Docker containers between different hosts?

I cannot find a way to move running Docker containers from one host to another.
Is there any way I can push my containers to repositories like we do for images?
Currently, I am not using data volumes to store the data associated with the applications running inside the containers. So some data resides inside the containers, and I want to persist it before redesigning the setup.
Alternatively, if you do not wish to push to a repository:
Export the container to a tarball
docker export <CONTAINER ID> > /home/export.tar
Move your tarball to the new machine
Import it back
cat /home/export.tar | docker import - some-name:latest
You cannot move a running docker container from one host to another.
You can commit the changes in your container to an image with docker commit, move the image onto a new host, and then start a new container with docker run. This will preserve any data that your application has created inside the container.
NB: it does not preserve data that is stored inside volumes; you need to move data volumes to the new host manually.
What eventually worked for me, after lots of confusing manuals and confusing tutorials (Docker is obviously, as of my writing, at the peak of inflated expectations), is:
Save the Docker image into an archive:
docker save image_name > image_name.tar
Copy it to the other machine.
On that other Docker machine, run docker load as follows:
cat image_name.tar | docker load
Export and import, as proposed in other answers, does not export the ports and environment variables that might be required for your container to run. You might end up with errors like "No command specified" when you try to load the image on another machine.
So, the difference between save and export is that the save command preserves the whole image, with history and metadata, while the export command exports only the file structure (without history or metadata).
Needless to say, if the exposed ports are already taken on the Docker host where you do the import, by some other container, you will end up with a conflict and will have to reconfigure the exposed ports.
Note: in order to move data with Docker, you might have persistent storage somewhere, which should also be moved alongside the containers.
Use this script:
https://github.com/ricardobranco777/docker-volumes.sh
This does preserve data in volumes.
Example usage:
# Stop the container
docker stop $CONTAINER
# Create a new image
docker commit $CONTAINER $CONTAINER
# Save image
docker save -o $CONTAINER.tar $CONTAINER
# Save the volumes (use ".tar.gz" if you want compression)
docker-volumes.sh $CONTAINER save $CONTAINER-volumes.tar
# Copy image and volumes to another host
scp $CONTAINER.tar $CONTAINER-volumes.tar $USER@$HOST:
# On the other host:
docker load -i $CONTAINER.tar
docker create --name $CONTAINER [<PREVIOUS CONTAINER OPTIONS>] $CONTAINER
# Load the volumes
docker-volumes.sh $CONTAINER load $CONTAINER-volumes.tar
# Start container
docker start $CONTAINER
From the Docker documentation:
docker export does not export the contents of volumes associated
with the container. If a volume is mounted on top of an existing
directory in the container, docker export will export the contents
of the underlying directory, not the contents of the volume. Refer
to Backup, restore, or migrate data volumes
in the user guide for examples on exporting data in a volume.
I tried many solutions for this, and this is the one that worked for me:
1. Commit/save the container to a new image:
++ Commit the container:
# docker stop CONTAINER_NAME
# docker commit CONTAINER_NAME IMAGE_NAME:TAG
# docker save --output IMAGE_NAME.tar IMAGE_NAME:TAG
PS: our container CONTAINER_NAME has a mounted volume at /var/home (you have to inspect your container to find its volume path: # docker inspect CONTAINER_NAME)
++ Save its volume: we will use an ubuntu image to do it.
# mkdir backup
# docker run --rm --volumes-from CONTAINER_NAME -v $(pwd)/backup:/backup ubuntu bash -c "cd /var/home && tar cvf /backup/volume_backup.tar ."
Now when you look in $(pwd)/backup, you will find our volume in tar format.
Until now, we have our container's image IMAGE_NAME.tar and its volume volume_backup.tar.
Now you can recreate the same old container on a new host.
docker export <container-id> | gzip > <name>.tar.gz
# new host
gunzip < /mnt/usb/<name>.tar.gz | docker import - <image-name>
docker run -i -p 80:80 <image-name> /bin/bash
