Volume mount is not happening using the Kubernetes plugin in Jenkins

I am using Kubernetes to spin up Jenkins slaves for my builds. I can get the plugin working without any issues.
Now I am trying to mount a volume using the plugin. After adding the volume information in the plugin, it's not even starting the container. I am not sure what is missing here.
In the Dockerfile, I have added this line:
VOLUME /home/myslave
In the Pod Template (under the Jenkins configuration) I have these volume settings:
Host path: /jenkins/slave
Mount path: /home/myslave
Thanks in advance.

You need to take three actions to fix this issue (a sketch of the resulting pod spec follows the list):
rename your container to jnlp in the Kubernetes plugin.
keep the JNLP agent configured correctly in your image's ENTRYPOINT.
leave "Command to run" and "Arguments to pass to the command" empty.

Related

Docker in Docker: building Docker agents in a Docker-contained Jenkins server

I am currently running Jenkins with Docker. When trying to build Docker apps, I am unsure whether I should use Docker in Docker (DinD) by binding the /var/run/docker.sock file, or install another instance of Docker inside my Jenkins Docker container. I have read that, previously, using anything other than docker.sock was discouraged.
I don't really understand why we would use anything other than the host's Docker daemon, apart from not polluting it.
Source: https://itnext.io/docker-in-docker-521958d34efd
The best solution for the "Jenkins in a Docker container needs Docker" case is to add your host as a node (slave) in Jenkins. This makes every build step (literally everything) run on your host machine. It took me a month to find the perfect setup.
Mount the Docker socket in the Jenkins container: you will lose context. The files you want to COPY into the image are located inside the workspace in the Jenkins container, while your Docker daemon is running on the host, so COPY fails for sure.
Install the Docker client in the Jenkins container: you have to alter the official Jenkins image, which adds complexity. And you will lose context too.
Add your host as a Jenkins node: perfect. You have the context, and there is no need to alter the official image (see the sketch after this list).
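As a rough sketch of that third option, assuming you register the host as a node with a label such as docker-host (both the label and the image tag here are illustrative), a Jenkinsfile can pin the Docker steps to it:

node('docker-host') {                      // label given to the host when adding it as a node
    checkout scm                           // the workspace now lives on the host, so COPY has its build context
    sh 'docker build -t myapp:latest .'    // runs against the host's own Docker daemon
}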
Without completely understanding why you would need Docker in Docker (I suspect you have some special requirements for the environment in which you build the actual image), may I suggest multi-stage building of Docker images? You might find it useful, as it enables you to first build the build environment and then build the actual image (hence the name 'multi-stage build'). Check it out here: https://docs.docker.com/develop/develop-images/multistage-build/
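A minimal multi-stage Dockerfile sketch, using a Go application purely for illustration; the first stage carries the toolchain, and only the compiled artifact is copied into the final image:

# Build stage: carries the full Go toolchain (Go chosen purely for illustration)
FROM golang:1.12 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o app .

# Final stage: only the compiled artifact ships, keeping the image small
FROM alpine:3.9
COPY --from=builder /src/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]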

Using docker-compose on a Kubernetes instance with Jenkins - mounting empty volumes

I have a Jenkins instance set up using Google's Jenkins on Kubernetes solution. I have not changed any of the settings of the Kubernetes pod.
When I trigger a new job, I am successfully able to get everything up and running until the point where my tests run.
My tests use docker-compose. First I make sure to install docker (1.5-1+b1) and docker-compose (1.8.0-2) on the instance (I know I can optimize this by using an image that already includes these, but I am still just at the proof-of-concept stage).
When I run the docker-compose up command, everything works and the services start their initialization scripts. However, the mounts are empty. I have verified that the files exist on the Jenkins slave, and the mount is created inside the Docker service when I run docker-compose; however, the mounted directories are empty.
Some information:
In order to get around file permissions, I am using /tmp as the Jenkins workspace. I am using SCM to pull my files (successfully), and in the docker-compose file I specify version: '2' and the mount paths as absolute paths. The volumes section of the failing service looks like this:
volumes:
  - /tmp/automation:/opt/automation
I changed the command that is run in the service to ls /opt/automation and the result is an empty directory.
What am I missing? I just want to mount a directory into my docker-compose service. This works perfectly on Windows, Ubuntu, and CentOS machines. Why won't it work on the Kubernetes instance?
I found the reason it fails here:
A Docker container inside a Docker container uses the parent host's Docker daemon, and hence any volumes mounted in the "docker-in-docker" case are still referenced from the host, not from the container.
Therefore, the actual path mounted from the Jenkins container "does not exist" on the host. Because of this, a new, empty directory is created in the "docker-in-docker" container. The same thing applies when a directory is mounted into a new Docker container inside a container.
So it seems it is impossible to mount something from the outer Docker into the inner Docker this way, and another solution must be found.
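One common workaround, sketched here under the assumption that you can add a hostPath volume to the slave pod (names are illustrative): mount the same host directory into the slave at an identical path, so that a bind-mount source such as /tmp/automation resolves to the same directory for the host's Docker daemon as it does inside the Jenkins container:

apiVersion: v1
kind: Pod
spec:
  containers:
    - name: jnlp
      image: jenkins/jnlp-slave          # placeholder slave image
      volumeMounts:
        - name: automation
          mountPath: /tmp/automation     # identical path inside the slave...
  volumes:
    - name: automation
      hostPath:
        path: /tmp/automation            # ...and on the node whose daemon runs compose

On a multi-node cluster this only holds for the node the pod lands on, so treat it as a sketch rather than a general fix.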

Easiest way to run a docker build command within a Jenkinsfile on a Jenkins slave node?

Basic example of what I want my Jenkinsfile to do:
node {
    sh 'docker build -t foo/bar .'
}
It seems like I need to install Docker onto the Jenkins slave image that's executing my Jenkinsfile. Is there an easy way of doing this? (That Jenkins slave image is itself a Docker container.)
Are my assumptions correct?
When running with Jenkins master/slaves, the Jenkinsfile is executed by a Jenkins slave.
Jenkins plugins installed via the Manage Plugins section (e.g. the Docker plugin, or the GCloud SDK plugin) are only installed on the Jenkins master; therefore I would need to manually build my Jenkins slave Docker image and install Docker on it?
Since I also need access to the gcloud command (I'm running Jenkins via a Kubernetes Helm chart), I've been using the gcr.io/cloud-solutions-images/jenkins-k8s-slave image for my Jenkins slave.
Currently it errors out saying "docker: not found"
My assumption is that you want to run docker build inside the Jenkins slave (which is a Kubernetes pod, I assume created by the Jenkins Kubernetes plugin).
To set the stage: when Kubernetes creates a pod that acts as a Jenkins slave, all commands that you execute inside the node block run inside that Kubernetes pod, inside one of its containers (by default there is only one container, but more on this later).
So you are actually trying to run a Docker command inside a container based on gcr.io/cloud-solutions-images/jenkins-k8s-slave, which is most likely based on the official Jenkins JNLP slave, which does not contain Docker!
From this point forward, there are two approaches that you can take:
use a slightly modified image based on the JNLP slave that also contains the Docker client, and mount the Docker socket (/var/run/docker.sock) inside the container (you can find details on this approach here).
Here is an image that contains the Docker client and kubectl.
Here is a complete view of how to configure the Jenkins Plugin:
Note that you use a different image (you can create your own and add any binary you want there) and that you mount the Docker socket inside the container.
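For concreteness, a hedged sketch of what this first approach looks like as a pod manifest; the image name is a placeholder, and the socket path is the standard Docker one:

apiVersion: v1
kind: Pod
spec:
  containers:
    - name: jnlp
      image: myrepo/jnlp-slave-with-docker   # placeholder: JNLP slave plus the Docker client
      volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock    # talk to the node's Docker daemon
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock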
The problem with the first approach is that you create a new image forked from the official JNLP slave and manually add the Docker client. This means that whenever Jenkins or Docker has an update, you need to manually update your image and your entire configuration, which is not desirable.
Using the second approach, you always use official images, and you use the JNLP slave to start other containers in the same pod.
Here is the full file from the image below
Here is the Jenkins Plugin documentation for doing this
As I said, the JNLP image will start a container that you specify in the same pod. Note that in order to use Docker from a container, you still need to mount the Docker socket.
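For illustration, here is a scripted-pipeline sketch of this second approach using the Kubernetes plugin's podTemplate/containerTemplate syntax; the label and the docker image tag are assumptions:

podTemplate(label: 'docker-build', containers: [
        containerTemplate(name: 'docker', image: 'docker:18.09', command: 'cat', ttyEnabled: true)
    ],
    volumes: [hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock')]) {
    node('docker-build') {
        container('docker') {              // switch from the jnlp container to the official docker image
            checkout scm
            sh 'docker build -t foo/bar .'
        }
    }
}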
These are the two ways I found to achieve building images inside a Jenkins JNLP slave running inside a container.
The example also shows how to push the image using credential bindings from Jenkins, and how to update a Kubernetes deployment as part of the build process.
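As a rough sketch of those last two steps (the credential ID, image name, and deployment name below are assumptions, not the linked example's actual values):

withCredentials([usernamePassword(credentialsId: 'dockerhub-creds',
                                  usernameVariable: 'DOCKER_USER',
                                  passwordVariable: 'DOCKER_PASS')]) {
    sh 'docker login -u "$DOCKER_USER" -p "$DOCKER_PASS"'   // credential binding from Jenkins
    sh 'docker push foo/bar'
}
sh 'kubectl set image deployment/my-app my-app=foo/bar'     // roll the new image out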
Some more resources:
deploy Jenkins to Kubernetes as Helm chart, configure plugins to install
Thanks,
Radu M

Docker commit doesn't save the changed state of my container

I am a newbie with Docker, but I have read many guides. I am configuring a container that runs a Jenkins base image with the Blue Ocean plugin. I ran it using the docker run command, configured my proxy information, and added another plugin, the Kubernetes plugin, through the Jenkins Manage Plugins UI. Then I stopped the container and committed it with docker commit, to save the state that has the Kubernetes plugin and the proxy information I had already set. But when I run the new Docker image that I made with docker commit, I can't see any proxy information or the Kubernetes plugin. It is the same image I started with. Is there something I missed?
JENKINS_HOME is set to be a volume in the default Jenkins Docker image (which I'm assuming you're using). Volumes live outside of the Docker container's layered filesystem, which means that any changes in those folders will not be persisted in subsequent image commits.
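Instead of committing, the usual fix is to persist JENKINS_HOME explicitly. A minimal docker-compose sketch, assuming the jenkinsci/blueocean image (the service and volume names are illustrative):

version: '2'
services:
  jenkins:
    image: jenkinsci/blueocean
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - jenkins_home:/var/jenkins_home   # plugins and proxy settings survive restarts here
volumes:
  jenkins_home:                          # named volume managed by Docker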

Docker container Jenkins - access home path

I have just started setting up Jenkins on Docker. I started the Docker container and am planning to run the ANT script I have written; this is where the problems started.
Jenkins kept on reporting
ERROR: Unable to find build script at /var/jenkins_home/workspace/SampleSCM/.SampleProject/build.xml
I am not sure how to access /var/jenkins_home on my local host. Can someone please help?
Thanks.
You won't find this location on your laptop, because it is not there; it is inside the Docker container.
Normally you would check out the sources as part of your build; you do not put them there yourself.
If you want to see the files, you can use the Jenkins GUI, or attach to your container (docker attach) and look in there. The idea of Docker is that it runs isolated, unless you tell it to map volumes (see here for a reference).
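If you do want /var/jenkins_home visible on your machine, a minimal sketch (compose form, host path illustrative) is to map it to a host directory when starting the container:

version: '2'
services:
  jenkins:
    image: jenkins/jenkins:lts             # placeholder for whatever Jenkins image you run
    ports:
      - "8080:8080"
    volumes:
      - ./jenkins_home:/var/jenkins_home   # the workspace then appears under ./jenkins_home on the host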
