Docker commit doesn't save the changed state of my container

I am new to Docker, though I have read many guides. I am configuring a container running the base Jenkins image with the Blue Ocean plugin. I started it with docker run, configured my proxy information, and installed another plugin (the Kubernetes plugin) through the Jenkins Manage Plugins UI. Then I stopped the container and ran docker commit to save this state, with the Kubernetes plugin and proxy settings already in place. But when I run a new container from the image created by docker commit, I don't see the proxy information or the Kubernetes plugin; it behaves like the original image I started from. Is there something I'm missing?

JENKINS_HOME is set to be a volume in the default Jenkins Docker image (which I'm assuming you're using). Volumes live outside of the Docker container layered filesystem. This means that any changes in those folders will not be persisted in subsequent image commits.
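You can see this for yourself, assuming you are on the official jenkins/jenkins image (the container and snapshot names below are placeholders):

```shell
# The Jenkins image declares /var/jenkins_home as a volume; inspect confirms it.
docker inspect --format '{{ .Config.Volumes }}' jenkins/jenkins:lts

# Because of that declaration, docker commit captures filesystem changes
# everywhere EXCEPT under /var/jenkins_home -- which is exactly where
# plugins and most Jenkins configuration are written.
docker commit my-jenkins my-jenkins-snapshot
```

So the commit succeeds, but the plugin and proxy changes were never inside the container's writable layer to begin with.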

Related

How to start a docker container from Jenkins?

I am trying to start a container using Jenkins and a Dockerfile in my SCM.
Jenkins uses the Dockerfile from my SCM repository and builds the image on a remote server. This is done using the CloudBees Docker Build and Publish plugin.
When I ssh to the server, I see that the image has been built with the tags I had defined in Jenkins.
# docker image ls
What I am not able to do is run a container for the image that has been built. How to get the image-id and start the container? Shouldn't it have been very simple given many plugins are provided?
Could your problem be related to how you refer to the recently created image in order to run it? Can you provide an extract of your pipeline and how you are trying to achieve this?
If that is the case, there are different solutions, one being to specify a tag during the build, so you can then refer to the image when running it.
Regarding how to work with image IDs: the docker build process returns the ID of the image it creates. You can capture that ID and then use it to run a container.
start the container yourself on the VM using the standard docker run command.
use software such as Watchtower to restart the container with an updated version when one becomes available.
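Both the tag-based and ID-based approaches can be sketched as follows (the image name is a placeholder):

```shell
# Option A: tag the image at build time, then run it by tag.
docker build -t myorg/myapp:latest .
docker run -d --name myapp myorg/myapp:latest

# Option B: capture the image ID explicitly; `docker build -q` prints
# only the ID of the image it just built.
IMAGE_ID=$(docker build -q .)
docker run -d "$IMAGE_ID"
```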

Docker in Docker: building Docker agents in a Docker-contained Jenkins server

I am currently running Jenkins in Docker. When building Docker apps, I am unsure whether I should use Docker in Docker (DinD) by bind-mounting the /var/run/docker.sock file, or by installing a second Docker instance inside my Jenkins container. I had read that anything other than mounting docker.sock used to be discouraged.
I don't actually understand why we should use anything other than the host's Docker daemon, apart from not polluting it.
sources : https://itnext.io/docker-in-docker-521958d34efd
The best solution for the "Jenkins in a Docker container needs Docker" case is to add your host as a node (agent) in Jenkins. This makes every build step (literally everything) run on your host machine. It took me a month to find the right setup.
Mount the Docker socket in the Jenkins container: you lose the build context. The files you want to COPY into the image live in the workspace inside the Jenkins container, while Docker runs on the host, so COPY fails.
Install a Docker client in the Jenkins container: you have to alter the official Jenkins image, which adds complexity, and you still lose the context.
Add your host as a Jenkins node: perfect. You keep the context and don't alter the official image.
Without completely understanding why you would need Docker in Docker (I suspect you have special requirements for the environment in which the actual image is built), may I suggest multi-stage builds of Docker images? You might find them useful, as they let you first build the build environment and then build the actual image (hence the name "multi-stage build"). Check it out here: https://docs.docker.com/develop/develop-images/multistage-build/
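A minimal sketch of a multi-stage build, assuming a Maven project (the base image tags and jar path are examples you would adapt): the first stage is the build environment, the second is the slim runtime image that ships.

```shell
# Write a two-stage Dockerfile: stage "build" compiles, the final stage
# copies only the artifact, so the build toolchain never ships.
cat > Dockerfile <<'EOF'
FROM maven:3-jdk-11 AS build
WORKDIR /src
COPY . .
RUN mvn -q package

FROM openjdk:11-jre-slim
COPY --from=build /src/target/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
EOF

docker build -t myapp:latest .
```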

Run docker container from Jenkins pipeline

I currently run a Jenkins instance inside a Docker container. I've been doing some experimenting with Jenkins and their pipelines. I managed to make my Maven app build successful using a Jenkinsfile.
Right now I am trying to automatically deploy the app I built to a docker container that's a sibling to the Jenkins container. I did already mount the /var/run/docker.sock so I have access to the parent docker. But right now I can't seem to find any good information to guide me through the next part, which is modifying the Jenkinsfile to deploy my app in a new container.
How would I go over and run my Maven app inside a sibling docker container?
It might be more appropriate to spin up a slave node (a physical or virtual box) and then run something in there with docker.
If you check your instance URL: https://jenkins.address/job/myjob/pipeline-syntax/
You will find more information about what are the commands you need for this.
Anyway, the best way to do this is to create a Dockerfile, copy the artifact into the image as a build step, push the image to a registry, and then deploy the freshly built image somewhere.
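As a sketch, these are the commands an sh step in the Jenkinsfile could run after the Maven build. Because /var/run/docker.sock is mounted, they act on the host daemon, so the new container comes up as a sibling of the Jenkins container. The image and container names are placeholders.

```shell
# Build the app image from the workspace (Dockerfile copies the Maven artifact).
docker build -t myapp:${BUILD_NUMBER} .

# Replace any previous deployment, then start the sibling container.
docker rm -f myapp || true
docker run -d --name myapp -p 8081:8080 myapp:${BUILD_NUMBER}
```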

Can I move docker container that includes Jenkins setups to other server?

I have a Jenkins setup in a docker container in my local computer.
Can I move it to a company's CI server and re-use job items?
I tried this at local computer
docker commit
docker push
At CI server
docker pull
docker run
However, when I ran Jenkins on the CI server, it started freshly initialized.
How can I get all the configurations and job items using Docker?
As described in the docs for the commit command:
The commit operation will not include any data contained in volumes
mounted inside the container.
The Jenkins home is mounted as a volume, so when you commit the container, the Jenkins home is not committed with it. Therefore, none of the job configuration currently on the running local container becomes part of the committed image.
Your problem reduces to migrating the jenkins_home volume from your machine to another machine. That problem is solved, and you can find the solution here.
I would suggest, however, a better and more scalable approach, specifically for Jenkins. The problem with the first approach is that quite some manual intervention is needed whenever you want to start a similar Jenkins instance on a new machine.
The solution is as follows:
Commit the container that is currently running
Copy the job configuration out of the container using: docker cp &lt;container&gt;:/var/jenkins_home/jobs ./jobs. This copies the job config from the running container onto your machine. Remember to clean out the builds folders.
Create a Dockerfile that inherits from the committed image and copies the job config into the jenkins_home.
Push the image, and you should have an image that you can pull that has all the jobs configured correctly.
The dockerfile will look something like:
FROM &lt;committed-image&gt;
COPY jobs/ /var/jenkins_home/jobs/
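Putting the steps above together as one sketch (container, image, and registry names are placeholders):

```shell
# 1. Commit the running container so plugins/config in the image layers survive.
docker commit my-jenkins my-jenkins:snapshot

# 2. Copy job configs out of the volume, dropping build history.
docker cp my-jenkins:/var/jenkins_home/jobs ./jobs
rm -rf ./jobs/*/builds

# 3. Build from the Dockerfile above and push somewhere pullable.
docker build -t registry.example.com/my-jenkins:configured .
docker push registry.example.com/my-jenkins:configured
```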
You need to check how the Jenkins image (hub.docker.com/r/jenkins/jenkins/) was launched on your local computer: if it was mounting a local volume, that volume should include the JENKINS_HOME with all the job configurations and plugins.
docker run -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
You need to export that volume too, not just the image.
See for instance "Docker & Jenkins: Data that Persists ", using a data volume container that you can then export/import.
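One common way to move the volume, sketched with a throwaway alpine container as the tar helper (the volume name matches the run command above):

```shell
# On the old host: pack the jenkins_home volume into a tarball.
docker run --rm -v jenkins_home:/data -v "$PWD":/backup alpine \
    tar czf /backup/jenkins_home.tar.gz -C /data .

# Copy jenkins_home.tar.gz to the new host, then unpack into a fresh volume:
docker volume create jenkins_home
docker run --rm -v jenkins_home:/data -v "$PWD":/backup alpine \
    tar xzf /backup/jenkins_home.tar.gz -C /data

# Start Jenkins against the restored volume.
docker run -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
```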

Easiest way to do docker build command within Jenkinsfile running on Jenkins slave node?

Basic example of what I want my Jenkinsfile to do:
node {
    sh 'docker build -t foo/bar .'
}
It seems like I need to install docker onto the Jenkins slave image that's executing my Jenkinsfile. Is there an easy way of doing this? (That Jenkins slave image is itself a docker container)
Are my assumptions correct?
When running with Jenkins master/slaves, the Jenkinsfile is executed by a Jenkins slave
Jenkins plugins installed via the Manage Plugins section (e.g. the Docker plugin, or a Gcloud SDK plugin) are only installed on the Jenkins master; therefore I would need to build my own Jenkins slave Docker image with Docker installed on it?
Since I also need access to the 'gcloud' command (I'm running Jenkins via Kubernetes Helm/Charts), I've been using the gcr.io/cloud-solutions-images/jenkins-k8s-slave image for my Jenkins slave.
Currently it errors out saying "docker: not found"
My assumption is that you want to docker build inside the Jenkins slave (which is a Kubernetes pod, I assume created by the Kubernetes Jenkins plugin).
To set the stage: when Kubernetes creates the pod that will act as a Jenkins slave, all commands that you execute inside the node block are executed inside that Kubernetes pod, in one of its containers (by default there is only one container, but more on this later).
So you are actually trying to run a Docker command inside a container based on gcr.io/cloud-solutions-images/jenkins-k8s-slave, which is most likely based on the official Jenkins JNLP slave image, which does not contain Docker!
From this point forward, there are two approaches that you can take:
use a slightly modified image based on the JNLP slave that also contains the Docker client and mount the Docker socket (/var/run/docker.sock) inside the container.
(You can find details on this approach here).
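A minimal sketch of this first approach, shown outside Kubernetes for clarity; the agent image name, Jenkins URL, secret, and agent name are all placeholders.

```shell
# Run a JNLP agent image that also bundles the docker CLI, with the host's
# Docker socket mounted in; `docker build` inside the agent then talks to
# the host daemon rather than needing its own.
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-jnlp-agent-with-docker:latest \
  -url http://jenkins.example.com:8080 <agent-secret> <agent-name>
```

In the Kubernetes plugin, the equivalent is configuring the pod template to use that image and to mount /var/run/docker.sock as a hostPath volume.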
Here is an image that contains the Docker client and kubectl.
Here is a complete view of how to configure the Jenkins Plugin:
Note that you use a different image (you can create your own and add any binary you want there) and that you mount the Docker socket inside the container.
The problem with the first approach is that you create a new image forked from the official JNLP slave and manually add the Docker client. This means that whenever Jenkins or Docker has updates, you need to manually update your image and the entire configuration, which is not desirable.
Using the second approach you always use official images, and you use the JNLP slave to start other containers in the same pod.
Here is the full file from the image below
Here is the Jenkins Plugin documentation for doing this
As I said, the JNLP image will start a container that you specify in the same pod. Note that in order to use Docker from a container you still need to mount the Docker sock.
These are the two ways I found to achieve building images inside a Jenkins JNLP slave running inside a container.
The example also shows how to push the image using credential bindings from Jenkins, and how to update a Kubernetes deployment as part of the build process.
Some more resources:
deploy Jenkins to Kubernetes as Helm chart, configure plugins to install
Thanks,
Radu M
