I did a docker pull jenkins:latest
then I ran the container: docker run --name jenk -p 8080:8080 jenkins
I set up all the jobs, configurations, etc. within Jenkins. Afterwards I committed the change:
docker commit jenk myrepo/jenkins
When I now pull the image and start it (docker run myrepo/jenkins), all the configuration is lost. I thought it would be preserved.
You also need to push it to your (remote) repository before you can pull it again. The commit only saves the state to your local drive. A pull always goes to a repository.
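For example (using the myrepo namespace from your own commands; the push assumes you have a matching Docker Hub account):
docker commit jenk myrepo/jenkins
docker push myrepo/jenkins
# on the machine where you want to run it
docker pull myrepo/jenkins
docker run -p 8080:8080 myrepo/jenkins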
Some free advice:
It is usually advisable to make changes through a Dockerfile instead, by extending jenkins:latest and adding your own changes to it. This makes the image much more maintainable and easier to change.
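A minimal sketch of such a Dockerfile (the plugins.txt mechanism is described in the official jenkins image docs; the exact plugin list is up to you):
FROM jenkins:latest
# plugins.txt lists one plugin per line, e.g. "git:2.3.4"
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
Then build it with docker build -t myrepo/jenkins . and run it with the same docker run command as above.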
Question:
Did you do this all inside the image or also on mounted volumes?
According to the documentation, those settings will not be included:
The commit operation will not include any data contained in volumes mounted inside the container.
Have fun :-)
As described in the docker commit documentation:
The commit operation will not include any data contained in volumes
mounted inside the container.
The jenkins image declares the Jenkins home as a volume: VOLUME /var/jenkins_home. The volume contains all the configuration and jobs created. Thus when you commit the container, all this configuration will not be persisted in the committed image.
If you are running the new image on the same machine, you can use the jenkins_home volume from the older container and get exactly the same jenkins instance:
docker volume ls  # to determine the old container's volume name
docker run -v <old-volume-name>:/var/jenkins_home -p 8080:8080 myrepo/jenkins
If you are running the committed instance on a new machine:
docker cp <old-container>:/var/jenkins_home ./jenkins_home
Now copy the jenkins_home folder onto the new machine, and mount it onto the new container:
docker run -v "$PWD/jenkins_home":/var/jenkins_home -p 8080:8080 myrepo/jenkins
Related
I'm working on a project using Node-RED deployed with Docker, and I would like to save the state of my deployment, including flows, settings and newly added modules, so that I can save the image and load it on another host, replicating exactly the same Node-RED instance.
I created the container using:
docker run -itd --name my-nodered node-red
After implementing the flows and installing some custom modules, with the container running, I used these commands:
docker commit my-nodered my-project-nodered/my-nodered:version1
docker save my-project-nodered/my-nodered:version1 > tar-archive.tar.gz
And on another machine I imported the image using:
docker load < tar-archive.tar.gz
And ran it using:
docker run -itd my-project-nodered/my-nodered:version1
And I get a vanilla Node-RED container, with a default /data directory; the changes I made there are not maintained.
What am I missing? Could it be that my /data directory is overwritten, as well as my settings.js file in the home directory? And in that case, what is the best practice to achieve my goal?
Thank you a lot in advance
commit will not work, as you can see there is a volume defined in the Dockerfile:
# User configuration directory volume
VOLUME ["/data"]
That makes it impossible to create a derived image with any different content in that directory tree. (This is the same reason you can't create a mysql or postgresql image with prepopulated data.)
docker commit doesn't consider volumes at all, so you'll get an unchanged image with nothing preloaded in it.
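You can verify which paths an image declares as volumes before committing (a sketch, using docker inspect's format flag):
docker inspect -f '{{ .Config.Volumes }}' nodered/node-red-docker
# prints something like: map[/data:{}]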
You can see the official documentation:
Managing User Data
Once you have Node-RED running with Docker, we need to ensure any
added nodes or flows are not lost if the container is destroyed. This
user data can be persisted by mounting a data directory to a volume
outside the container. This can either be done using a bind mount or a
named data volume.
Node-RED uses the /data directory inside the container to store user
configuration data.
nodered-user-data-in-docker
One way is to restore your config files on the other machine, for example into a backup-config folder, then:
docker run -it -p 1880:1880 -v $PWD/backup-config/:/data --name mynodered nodered/node-red-docker
Or, if you want to pull a flows file from some repo, you can fetch it first and then mount it:
wget https://raw.githubusercontent.com/openenergymonitor/oem_node-red/master/flows_emonpi.json
docker run -it --rm -v "$PWD/flows_emonpi.json":/data/flows_emonpi.json nodered/node-red-docker
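Putting this together for the question above, one way to carry the /data directory to the other machine alongside the saved image (a sketch; the local folder name is just an example):
# on the old machine, copy /data out of the container
docker cp my-nodered:/data ./node-red-data
# transfer node-red-data to the new machine, then, after docker load:
docker run -itd -p 1880:1880 -v "$PWD/node-red-data":/data my-project-nodered/my-nodered:version1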
Currently, I run my containers like this, for example:
docker run -v /nexus-data:/nexus-data sonatype/nexus3
(here, the first nexus-data is a host directory, i.e. a bind mount)
After reading the documentation, I discovered named volumes, which are completely managed by Docker. For several reasons, I want to change the way I run my containers, to something like this:
docker run -v nexus-data:/nexus-data sonatype/nexus3
(here, nexus-data is a named volume)
I want to migrate my existing bind mounts to named volumes.
But I don't want to lose the data in the /nexus-data folder. Is there a way to transfer this folder to the new volume without restarting everything? I also have Jenkins and Sonar containers, for example; I just want to change the way persistent data is stored. Is there a proper way to do this?
You can try the following steps so that you will not lose your current nexus-data.
docker run -v nexus-data:/nexus-data sonatype/nexus3
docker cp /nexus-data/. <container-name-or-id>:/nexus-data/
docker stop <container-name-or-id>
docker start <container-name-or-id>
docker cp will copy the data from your host machine's /nexus-data folder into the container's filesystem at /nexus-data, which is your mounted volume.
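Alternatively, you can copy the bind-mounted data straight into the named volume with a throwaway container, without touching Docker's internal paths (a sketch using the alpine image):
docker run --rm -v /nexus-data:/from -v nexus-data:/to alpine sh -c 'cp -a /from/. /to/'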
Let me know if you face any issue while performing these steps.
Here's another way to do this, that I just used successfully with a Heimdall container. It's outlined in the documentation for the sonatype/nexus3 image:
Stop the running container (e.g. named nexus3)
Create a docker volume called nexus-data with the following command: docker volume create nexus-data
By default, Docker will store the volume's content at /var/lib/docker/volumes/nexus-data/_data/
Simply copy the directory where you previously had been using a bind mount to the aforementioned volume directory (you'll need super user privileges to do this, or for the user to be part of the docker group): cp -R /path/to/nexus-data/* /var/lib/docker/volumes/nexus-data/_data/
Restart the nexus3 container with docker run --name=nexus3 -v nexus-data:/nexus-data sonatype/nexus3 (note that --name must come before the image name)
Your container will be back up and running, with the files previously in /path/to/nexus-data/ now mirrored in the docker volume. Check that everything works as before, and if so, you can delete the /path/to/nexus-data/ directory.
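Rather than assuming the default location, you can confirm where Docker stores the volume's content first (a sketch):
docker volume inspect -f '{{ .Mountpoint }}' nexus-data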
Q.E.D.
Is there a way to clone a container and its data into a new one with different starting parameters?
At the moment I'm only able to start a new cloned container (from a custom image) WITHOUT the data.
Let me explain what I have to do: I started a "docker-jenkins" container with some starting parameters and then configured it, but now I've noticed that I forgot some important starting parameters, so I want to restart the same container with additional parameters...
The problem is (if I understand correctly) that I cannot modify the starting parameters of an existing running container, so my idea is to start a cloned one (data INCLUDED) with different parameters, but I don't understand how to do it...
Can someone help me?
1. Using volumes
If your sole point is to persist your data you need to use Volumes.
A data volume is a specially-designated directory within one or more
containers that bypasses the Union File System. Data volumes provide
several useful features for persistent or shared data:
Volumes are initialized when a container is created. If the container’s base image contains data at the specified mount point,
that existing data is copied into the new volume upon volume
initialization. (Note that this does not apply when mounting a host
directory.)
Data volumes can be shared and reused among containers.
Changes to a data volume are made directly.
Changes to a data volume will not be included when you update an image.
Data volumes persist even if the container itself is deleted.
Source:
https://docs.docker.com/engine/tutorials/dockervolumes/
Essentially you map a folder from your machine to one inside your container.
When you kill the container and spawn a new instance (with modified parameters) your volume (with the existing data) is re-mapped.
Example:
docker run -p 8080:8080 -p 50000:50000 -v /your/home:/var/jenkins_home jenkins
Source:
https://hub.docker.com/_/jenkins/
2. Using commit to create snapshots
A different route is to make use of the docker commit command.
It can be useful to commit a container’s file changes or settings into
a new image. This allows you to debug a container by running an
interactive shell, or to export a working dataset to another server.
Generally, it is better to use Dockerfiles to manage your images in a
documented and maintainable way.
The commit operation will not include any data contained in volumes
mounted inside the container.
https://docs.docker.com/engine/reference/commandline/commit/
$ docker ps
ID IMAGE COMMAND CREATED STATUS PORTS
c3f279d17e0a ubuntu:12.04 /bin/bash 7 days ago Up 25 hours
197387f1b436 ubuntu:12.04 /bin/bash 7 days ago Up 25 hours
$ docker commit c3f279d17e0a svendowideit/testimage:version3
f5283438590d
$ docker images
REPOSITORY TAG ID CREATED SIZE
svendowideit/testimage version3 f5283438590d 16 seconds ago 335.7 MB
It is also possible to commit with altered configuration:
docker commit --change='CMD ["apachectl", "-DFOREGROUND"]' -c "EXPOSE 80" c3f279d17e0a svendowideit/testimage:version4
To clone a container in Docker, you can use docker commit to create a snapshot of the container:
Use docker images to view the docker image REPOSITORY and TAG names.
Use docker ps -a to view the available containers and note the CONTAINER ID of the container of which a snapshot is to be created.
Use docker commit <CONTAINER ID> <REPOSITORY>:<TAG> to create a snapshot and save it as an image.
Again use docker images to view the saved image.
To access the saved snapshot, run:
docker run -i -t <IMAGE ID> /bin/bash
Or, to get back into the original container instead:
docker ps -a
docker start <CONTAINER ID>
docker exec -ti <CONTAINER ID> bash
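Note that if the container keeps its data in a volume (as the official Jenkins image does with /var/jenkins_home), the commit will not capture that data; you can still hand the old container's volumes to the clone (a sketch, assuming the old container is named docker-jenkins):
docker commit docker-jenkins docker-jenkins-snapshot
docker stop docker-jenkins   # free the published ports
docker run -p 8080:8080 -p 50000:50000 --volumes-from docker-jenkins --name docker-jenkins2 docker-jenkins-snapshot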
I am running an instance of docker, and I would like to save my work - the docs just aren't 100% clear on how to do this, so I'm asking here. I opened the docker instance using:
docker run -it [public dockerhub name]
Now I would like to save all my work locally so that I can come back to it. I don't particularly want to check it into dockerhub, unless that's advisable.
Here's what I have done. I opened a new docker CLI tab, and ran docker ps there to find the ID of the running docker instance. Then in the same tab I tried this:
docker commit <docker-id> me/myinstance
This gave me a commit hash.
Can I now safely exit the running docker instance? What command would I use to open it again - do I need to store the commit hash, or can I just do docker run -it me/myinstance?
As the docs mention:
You pull an image from Docker hub
You run that image on a container using docker run <image>
When you make changes to a container, you're not changing the underlying image, so those changes are not persisted if the container is stopped. To persist the changes you've made to the container, you create a new image with docker commit <container_id>
In the example that is on Docker docs:
# What containers are running on my system?
$ docker ps
ID IMAGE COMMAND CREATED
c3f279d17e0a ubuntu:12.04 /bin/bash 7 days ago
197387f1b436 ubuntu:12.04 /bin/bash 7 days ago
# Create a new image called svendowideit/testimage, tag it as "version3"
$ docker commit c3f279d17e0a svendowideit/testimage:version3
f5283438590d
# What images do I have on my system?
$ docker images
REPOSITORY TAG ID
svendowideit/testimage version3 f5283438590d
This way, you have persisted the changes to container c3f279d17e0a, on a new image, called svendowideit/testimage:version3.
Now you have an image with your modification, so you can run it as many times as you want on a container:
$ docker run svendowideit/testimage:version3
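In your case, since you'd rather not push to Docker Hub, you can also export the committed image to a local file instead (a sketch; the file name is an example):
docker save me/myinstance > myinstance.tar
# later, or on another machine:
docker load < myinstance.tar
docker run -it me/myinstance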
Again, containers are stateless. Any change you make inside a container is lost when that container stops. One way to persist data even after a container exits is by using volumes. This way your container has access to a directory in the host filesystem that you can read and write.
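For example (a sketch; the host path is illustrative):
docker run -it -v "$HOME/work":/work me/myinstance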
Changes made inside a container are not lost when the container exits, and containers (container applications) are not stateless, unless you have specifically separated the data storage from the application (by mounting folders from the host filesystem or sending data to a database outside of the container).
To see your changes persisted in a container, start the old container (docker start ~) instead of creating a new container (docker run ~).
This is easier to do if you name your containers.
i.e.
docker run -it --name containerName imageName
do stuff to your container
docker kill containerName
docker start containerName
You will see that your changes are persisted in that container.
You can also commit your container as an image, which can be pushed to a registry or exported to a file.
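For example (the image and file names are illustrative):
docker commit containerName my-image:snapshot
docker save my-image:snapshot > my-image.tar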
I created a new docker container using the jenkins image.
This is the command I ran
docker run -p 8080:8080 -v /var/jenkins_home jenkins
I created a few jobs on the Jenkins instance and committed the container:
docker commit 7b903d061654 test
When I run the image I created using the command below, I don't see the Jenkins jobs:
docker run -p 8080:8080 -v /var/jenkins_home test
Am I missing anything here? I was expecting the Jenkins jobs I created to be saved.
How do I persist changes and distribute images?
Data in a Docker volume (such as /var/jenkins_home) is not preserved as part of the docker commit operation. This is intentional: the idea is that you are persisting your data via some other mechanism, such as a host volume (-v /host/directory:/var/jenkins_home) or through the use of a data container (using --volumes-from).
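For example, either of the two mechanisms above (names and paths are illustrative):
# a host volume
docker run -p 8080:8080 -v /srv/jenkins_home:/var/jenkins_home jenkins
# or a data container shared via --volumes-from
docker create -v /var/jenkins_home --name jenkins-data jenkins
docker run -p 8080:8080 --volumes-from jenkins-data jenkins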
For more information about Docker volumes, see Managing data in containers.
Volumes are used in this fashion to keep data separate from your applications. This permits you to save large data into volumes without baking it into your images when you run docker commit, and similarly to store security credentials or other private data in a volume without accidentally leaking it into an image that you intend to distribute.