Docker commit doesn't save changes - save

Yeah, you're right, there are many topics like that. I didn't find a solution for my problem, so give me a chance!
I run a docker container with no defined volumes. So what I want is to commit changes like:
docker commit 3a09b2588478 myfantasticimage
docker save myfantasticimage > /tmp/fantasticimagecommit.tar
Now I transfer the image via scp to another Docker host and do
docker load < /tmp/fantasticimagecommit.tar
When I start the image, I can't see the changes I made before committing it.
What's the problem? According to the Dockerfile, no volumes are defined.
Thanks!
Update: I've found volumes via the docker inspect command:
"VolumesRW": {
"/var/lib/": true,
"/var/log/": true,
"/var/www/": true
}
What could be a workaround? I want to back up a container every 6 hours so I can restore it on the same or another machine without excessive effort.

"docker commit" cannot save mount volumes' data ~
You should docker cp files to the container ~
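As a workaround for the 6-hour backup, here is a sketch (container ID and volume paths are taken from the question above): commit and save the image as before, and archive the volume directories separately with a throwaway container.

# Commit and save the container filesystem (mounted volumes are NOT included):
docker commit 3a09b2588478 myfantasticimage
docker save myfantasticimage > /tmp/fantasticimagecommit.tar
# Archive the volume contents with a helper container that mounts the same
# volumes plus a host directory to receive the backup:
docker run --rm --volumes-from 3a09b2588478 -v /tmp/backup:/backup ubuntu \
  tar czf /backup/volumes.tar.gz /var/lib /var/log /var/www

On the other host, docker load the image and unpack volumes.tar.gz into the new container's volumes the same way, in reverse.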

To save and load Docker images on another machine without going through Docker Hub, use the commands below.
Say you have an image named app with tag 3. To save it to an app.tar file:
docker image save -o app.tar app:3
To load it, use:
docker image load -i app.tar
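If the two hosts can reach each other over SSH, you can also stream the image across without an intermediate file (a sketch; user@other-host is a placeholder):
docker image save app:3 | gzip | ssh user@other-host 'gunzip | docker image load'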

Related

Updating a docker image without the original Dockerfile

I am working on a Flask app running on an EC2 server inside a Docker container.
The old dev seems to have removed the original Dockerfile, and I can't find a way to push my changes into the Docker image without it.
I can copy my changes manually using:
docker cp newChanges.py doc:/root/doc/server_python/
but I can't seem to find a way to restart flask. I know this is not the ideal solution but it's the only idea I have.
There is one way: add newChanges.py to the existing image and commit that image with a new tag, so you will have a fallback option if you face any issue.
Suppose you run the official alpine image and you don't have its Dockerfile.
Every time you restart the container you will not have your newChanges.py:
docker run -it --name alpine alpine
Use ls inside the container to see a list of the existing files that were created by the Dockerfile.
docker cp newChanges.py alpine:/
Run ls and verify your file was copied over
Next Step
To commit these changes to your running container do the following:
docker ps
Get the container ID and run:
docker commit 4efdd58eea8a updated_alpine_image
Now run your updated image and you will see the changes:
docker run -it updated_alpine_image
This is what you will see in your updated_alpine_image, even without having a Dockerfile.
This is how you can rebuild an image from an existing one. You can also try @uncletall's answer.
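Put together, the whole flow is only four commands (a sketch using the names from the steps above; docker commit also accepts the container name instead of its ID):
docker run -it --name alpine alpine           # start a container from the base image
docker cp newChanges.py alpine:/              # from another terminal: copy the file in
docker commit alpine updated_alpine_image     # snapshot the container as a new image
docker run -it updated_alpine_image           # verify newChanges.py is present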
If you just want to restart after docker cp, you can just docker stop $your_container, then docker start $your_container.
If you want to update newChanges.py in the Docker image without the original Dockerfile, you can use docker export -o $your_tar_name.tar $your_container, then docker import $your_tar_name.tar $your_new_image:tag. Keep the tar on a backup server for future use.
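For example, with hypothetical names (doc is the container name from the question):
docker export -o flask_backup.tar doc
docker import flask_backup.tar flask_app:backup
# an imported image loses its CMD/ENTRYPOINT, so give a command explicitly:
docker run -it flask_app:backup /bin/sh
Note that docker import produces a plain filesystem image, which is why the run command above has to name /bin/sh explicitly.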
If you want to continue development later, use a Dockerfile for further changes:
You can use docker commit to generate a new image and docker push to push it to Docker Hub under a name like my_docker_id/my_image_name:v1.0.
Your new Dockerfile:
FROM my_docker_id/my_image_name:v1.0
# your new thing here
ADD another_new_change.py /root/
# others
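Then rebuild and push under a new tag, for example (the v1.1 tag is just an illustration):
docker build -t my_docker_id/my_image_name:v1.1 .
docker push my_docker_id/my_image_name:v1.1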
You can try to examine the history of the image; from there you can probably re-create the Dockerfile. Try docker history --no-trunc image-name.
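For instance, to list just the instruction behind each layer, newest first (image-name is a placeholder):
docker history --no-trunc --format '{{.CreatedBy}}' image-name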
See this answer for more details

docker does not preserve state

I made a docker pull jenkins:latest
then I ran the container: docker run --name jenk -p 8080:8080 jenkins
I set up all the jobs, configurations, etc. within Jenkins. Afterwards I committed the change:
docker commit jenk myrepo/jenkins
When I now pull the image and start it with docker run myrepo/jenkins, all the configuration is lost. I thought it would be preserved.
You also need to push it to your (remote) repository before you can pull it again. The commit only saves the state to your local drive. A pull always goes to a repository.
Some free advice:
It is usually advisable to make changes through a Dockerfile though, by extending jenkins:latest and adding your own changes to it. This makes the image much more maintainable and changeable.
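For example, a minimal Dockerfile along those lines might look like this (the copied file is a placeholder; the official jenkins image copies anything under /usr/share/jenkins/ref into the Jenkins home when a fresh container starts):
FROM jenkins:latest
# placeholder: configuration baked into the image ends up in /var/jenkins_home
COPY config.xml /usr/share/jenkins/ref/config.xml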
Question:
Did you do this all inside the image or also on mounted volumes?
According to the documentation, those settings will not be included:
The commit operation will not include any data contained in volumes mounted inside the container.
Have fun :-)
As described in the docker commit documentation:
The commit operation will not include any data contained in volumes
mounted inside the container.
The jenkins image declares the Jenkins home as a volume with VOLUME /var/jenkins_home. That volume contains all the configuration and jobs you created, so when you commit the container, none of this configuration is persisted in the committed image.
If you are running the new image on the same machine, you can use the jenkins_home volume from the older container and get exactly the same jenkins instance:
docker volume ls   # to determine the old container's volume name
docker run -v <old-volume-name>:/var/jenkins_home -p 8080:8080 myrepo/jenkins
If you are running the committed instance on a new machine:
docker cp <old-container>:/var/jenkins_home ./jenkins_home
Now copy the jenkins_home folder onto the new machine, and mount it onto the new container:
docker run -v "$(pwd)"/jenkins_home:/var/jenkins_home -p 8080:8080 myrepo/jenkins
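Alternatively, you can pack the volume into a tarball with a throwaway container and unpack it on the new machine (a sketch; substitute your real volume name):
# On the old machine: archive the volume into the current directory
docker run --rm -v <old-volume-name>:/var/jenkins_home -v "$(pwd)":/backup ubuntu \
  tar czf /backup/jenkins_home.tar.gz -C /var/jenkins_home .
# On the new machine: unpack and mount it into the new container
mkdir jenkins_home && tar xzf jenkins_home.tar.gz -C jenkins_home
docker run -v "$(pwd)"/jenkins_home:/var/jenkins_home -p 8080:8080 myrepo/jenkins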

Cached Docker image?

I created my own image and pushed it to my repo on docker hub. I deleted all the images on my local box with docker rmi -f ...... Now docker images shows an empty list.
But when I do docker run xxxx/yyyy:zzzz it doesn't pull from my remote repo and starts a container right away.
Is there any cache or something else? If so, what is the way to clean it all?
Thank you
I know this is old now but thought I'd share still.
Docker keeps all those old images in a cache unless you specifically build them with --no-cache. To clear the cache, you can simply run docker system prune -a -f, and it should clear everything, including the cache.
Note: this will remove everything unused, including stopped containers.
You forced removal of the image with -f. Since you used -f I'm assuming that the normal rmi failed because containers based on that image already existed. What this does is just untag the image. The data still exists as a diff for the container.
If you do a docker ps -a you should see containers based on that image. If you start more containers based on that same previous ID, the image diff still exists so you don't need to download anything. But once you remove all those containers, the diff will disappear and the image will be gone.
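A sketch of the final cleanup, if you do want the image gone (IDs are illustrative):
docker ps -a                # find the containers still based on the old image
docker rm <container-id>    # remove them; the image layers become dangling
docker image prune          # delete the now-unreferenced layers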

How to move docker installation to another machine?

I know about /var/lib/docker, but is mounting this directory on another machine enough to recover the Docker functionality of the original machine? I tried this between different CoreOS instances, but when I issued docker images, the images did not appear even though they were in the /var/lib/docker directory. Am I missing some other data that should be transferred?
The end goal is to have a portable 'repo' of images that I can build on from any machine.
Related: Where are Docker images stored on the host machine?
docker export, scp from machine A to machine B, and docker import should work well for you.
I think in order to transfer Docker images like this, they first have to be packaged as tars.
For the above query, if I am not wrong, you want to transfer images (all images) to a remote machine.
An easy approach is to create a registry on the second machine (say machine B) and push all images to it from the main machine (machine A), as sketched below.
However, I suspect there is a permission problem with the local mount point you are referring to. I suggest you first try chmod 777 on the local mount point; then, if it works, you can grant more restricted permissions.
Similarly, I have not tried mounting /var/lib/docker on another machine, but for it to work you should set the permissions, and the directory should be owned by the docker group.
Let us know if it was a permission issue that you faced.
Good luck!
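A sketch of the registry approach (host name and port are placeholders; for plain HTTP you may need to add machineB:5000 to Docker's insecure-registries setting):
# On machine B: run the official registry image
docker run -d -p 5000:5000 --name registry registry:2
# On machine A: tag each image for the remote registry and push it
docker tag my_image machineB:5000/my_image
docker push machineB:5000/my_image
# Any machine that can reach machine B can now pull it back
docker pull machineB:5000/my_image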
So in my solution I use both a private Docker registry and a 'shared' /var/lib/docker that I mount between my (ephemeral) instances/build machines. I intend to use the registry to distribute images to machines that won't be building. Sharing the Docker dir helps keep the build time down. I have the following steps for each Dockerfile.
docker pull $REGISTRY_HOST/$name
docker build -t $name $itsdir
echo loading into registry $REGISTRY_HOST/$name
# assuming repos in 'root' ( library/ )
docker rmi $REGISTRY_HOST/$name
docker tag $name $REGISTRY_HOST/$name
docker push $REGISTRY_HOST/$name
docker rmi $REGISTRY_HOST/$name
I think this works.

In Docker, how can I share files between containers and then save them to an image?

I want to commit the data in a container's shared volume to an image, but I cannot seem to do it. I get the impression this is simply not possible in Docker, but that seems totally at odds with the whole philosophy of not leaving data on the host, so part of me thinks there must be a way to do this.
1. Terminal 1
Start up a container in Terminal 1 with a volume.
$ docker run -it -v /data ubuntu:14.10 /bin/bash
root@19fead4f6a68:/# echo "Hello Docker Volumes." > /data/foo.txt
2. Terminal 2
Start up a second container in Terminal 2; the file from container 1 is there, so Docker volumes are working.
$ docker run -it --volumes-from 19fead4f6a68 ubuntu:14.10 /bin/bash
root@5c7cdbfc67d8:/# cat /data/foo.txt
Hello Docker Volumes.
3. Terminal 3
My understanding is that I can only commit diffs to images, so I check what the diffs are on both containers. For some bizarre reason my changes do not show up!?
$ docker diff 19fead4f6a68
A /data
$ docker diff 5c7cdbfc67d8
A /data
4. Back in Terminal 1
I create a file outside of the volume folder
root@19fead4f6a68:/# echo "Docker you are a very strange beast...." > /var/beast.txt
5. Back in Terminal 3
We now have some changes we can commit, although I am rather frustrated, as this is not the data from the volume that I needed to share with my other container.
$ docker diff 19fead4f6a68
A /data
C /var
A /var/beast.txt
Clearly this is by design. Does anyone have any idea why Docker doesn't allow me to save volume data in a commit? Is there any way at all to share files between containers and then save them to an image? I feel like there must be something I am missing, especially for sharing data while avoiding host dependencies.
Volumes are outside of container images. That's exactly what they are for - bringing data inside a container that isn't in the image.
From the Docker docs:
A data volume is a specially-designated directory within one or more containers that bypasses the Union File System to provide several useful features for persistent or shared data:
Data volumes can be shared and reused between containers
Changes to a data volume are made directly
Changes to a data volume will not be included when you update an image
If you want to save some changes as part of an image, make the changes inside the image and not in a volume. If you want to share changes across multiple containers, put that data in a volume but you have to make your own arrangements for snapshots, rollback, etc., because Docker doesn't have that feature.
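If you really need the volume's contents in an image, one workaround is to copy the data out of the volume into the container's own filesystem first and then commit (a sketch using the container ID from the question; /snapshot is an arbitrary path):
# copy /data (the volume) to a path that lives in the container filesystem
docker run --volumes-from 19fead4f6a68 ubuntu:14.10 cp -r /data /snapshot
# the copy shows up in docker diff of the new container, so commit that one
docker commit $(docker ps -l -q) image_with_volume_data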
Maybe you would be interested in Flocker.
It looks as though there is an open issue around adding volume layers to docker:
https://github.com/docker/docker/issues/9382
