In our setup, Docker containers work as Jenkins nodes/agents. This is done by adding a Docker host through "Configure Clouds". Now we have to carry out some housekeeping activity on these Docker hosts, but before that I want to make sure that no Jenkins job uses that Docker host during the activity. Is there any option to do so?
I was thinking of marking the agents as offline, but in this case I cannot find that option.
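To make it concrete, something like this Script Console sketch is what I had in mind (untested; the "docker-" name prefix is just a placeholder for whatever the cloud names its agents, and I realize the cloud could still provision fresh agents on top of this):

    import hudson.slaves.OfflineCause
    import jenkins.model.Jenkins

    // Temporarily take every agent whose name starts with a given
    // prefix out of rotation; running builds finish, but no new
    // builds are scheduled on these agents.
    def cause = new OfflineCause.ByCLI('Docker host maintenance')
    Jenkins.instance.computers
           .findAll { it.name.startsWith('docker-') }
           .each { c ->
               c.setTemporarilyOffline(true, cause)
               println "Marked ${c.name} as temporarily offline"
           }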
Please help.
I want to run my Jenkins on k8s. We can achieve that with any standard Helm chart or our own manifest files. In this case, Jenkins (master only) will run inside a container (Pod).
Now I also want to have a pipeline job that uses a Docker agent, as described here.
I am getting confused about:
how and where this Docker container will run (on the same node where Jenkins is running? and if that node's capacity is exhausted, it needs to run the Docker agent on a different node)
how Jenkins will authenticate to run containers on k8s nodes
I looked at the Kubernetes plugin and the Docker plugin. But those plugins create containers beforehand (or at least we need to set up a template, which decides how containers will start, which image will be used, and more) and connect them to Jenkins via JNLP/SSH. I lose the flexibility to have an arbitrary image as an agent in that case.
Going further, I would also like to build custom images on the fly from a Dockerfile shipped along with the code. An example is available at the same link.
I believe this documentation answers all of your questions: https://devopscube.com/jenkins-build-agents-kubernetes/
With this method you are not losing your flexibility, because your Jenkins master is going to create a K8s pod on the fly. Yes, you additionally need JNLP authentication, but you can think of that as a sidecar container.
About your first question: if you use exactly that approach, your Jenkins jobs are going to run under the Jenkins master, with the same Docker that your Jenkins master is using.
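To make the pod-on-the-fly part concrete, here is a minimal sketch of such a pipeline (the image, container name and mvn step are placeholders of mine, not something that article prescribes):

    pipeline {
        agent {
            kubernetes {
                // The pod is defined inline and exists only for this build.
                yaml '''
                    apiVersion: v1
                    kind: Pod
                    spec:
                      containers:
                      - name: maven
                        image: maven:3.9-eclipse-temurin-17
                        command: ["sleep"]
                        args: ["infinity"]
                '''
            }
        }
        stages {
            stage('Build') {
                steps {
                    // Run the build inside the pod's maven container.
                    container('maven') {
                        sh 'mvn -B package'
                    }
                }
            }
        }
    }

The Kubernetes plugin adds the jnlp container to this pod automatically, which is the sidecar I mentioned above.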
I have an EC2 instance running as my Jenkins master. I would like to run a container on that instance that will be used as another build executor, so I can run a few builds simultaneously.
I'm having problems connecting these.
In the Docker Hub Jenkins docs, it says under the relevant section:
You can run builds on the master out of the box.
But if you want to attach build slave servers through JNLP (Java Web Start): make sure you map the port: -p 50000:50000 - which will be used when you connect a slave agent.
If you are only using SSH slaves, then you do NOT need to put that port mapping.
But when I try to add a node in the Jenkins config, it asks for a remote root directory (probably /var/jenkins?) and a launch method.
I don't quite understand what I should give it as the launch method to make this work, and I don't understand where the port number comes into play.
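If it helps, here is roughly the node I think I am trying to create, expressed as a Script Console sketch (untested; the name, remote root and executor count are guesses on my part):

    import hudson.slaves.DumbSlave
    import hudson.slaves.JNLPLauncher
    import jenkins.model.Jenkins

    // JNLP ("launch agent by connecting it to the master") appears to be
    // the launch method the docs mean; the remote root is a path inside
    // the agent container, not on the master.
    def agent = new DumbSlave('docker-agent-1', '/home/jenkins/agent', new JNLPLauncher(true))
    agent.numExecutors = 2
    Jenkins.instance.addNode(agent)

As far as I understand, the -p 50000:50000 mapping only matters for this JNLP method, since that is the port the agent dials back to on the master.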
What you need is the Jenkins Docker plugin (link below); follow the instructions listed here:
https://wiki.jenkins.io/display/JENKINS/Docker+Plugin
I followed those instructions and was able to set up slaves in Jenkins that are dynamically provisioned.
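Once a template is configured, a job opts into those dynamically provisioned agents purely by label; a sketch (the label and build command below are placeholders, yours come from your template):

    pipeline {
        // 'docker-agent' must match the label set on the Docker template,
        // so the plugin knows to spin up a container for this build.
        agent { label 'docker-agent' }
        stages {
            stage('Build') {
                steps {
                    sh 'make build'
                }
            }
        }
    }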
We have a Jenkins setup with the Docker plugin installed and want to run our build jobs in Docker containers based on private images.
Here is what we have:
The Jenkins master runs on a "bare metal" VM, no containerization
We have a second VM with a Docker Engine running on it; the Docker Engine port of this VM is exposed and accessible from the Jenkins master via TCP
We created several Docker Templates (in the global Jenkins settings) and can use them in our build jobs, as long as we use the "never pull" strategy for the images
The problem occurs when we try to pull an image from our private registry (we use Artifactory for this, and it is accessible from the Docker Engine, since we can successfully push images from our Docker VM).
Whenever we start a job in Jenkins that uses such an image, which should always be pulled from the private registry, we see "All nodes of label 'OUR_IMAGE_LABEL' are offline" and the job hangs forever.
The strange thing is that we don't see anything related to such a job in the Jenkins log (/var/log/jenkins/jenkins.log on the Jenkins master), nor do we see anything in the Docker logs (/var/log/messages on the VM with the Docker Engine).
Things work perfectly fine if we switch back to the "never pull" strategy and have the image locally available on the Docker Engine.
Any ideas
why things are not working
how we could get at least some log messages about what Jenkins is doing when it shows "All nodes of label 'OUR_IMAGE_LABEL' are offline"?
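For the second point, is something along these lines the right direction? (Untested Script Console sketch; the package names are guesses based on the plugin's source and may need adjusting.)

    import java.util.logging.Level
    import java.util.logging.Logger

    // Raise the log level for the Docker plugin's packages so that
    // provisioning and pull failures show up in the Jenkins logs.
    ['com.nirima.jenkins.plugins.docker', 'io.jenkins.docker'].each { pkg ->
        Logger.getLogger(pkg).level = Level.ALL
    }

Or would a log recorder for the same packages under Manage Jenkins > System Log be the better way?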
I saw a couple of tutorials on continuous deployment (on docker.com, codecentric.de, and devopscube.com).
Overall I saw two approaches:
Set up two types of Jenkins servers (master and slave). The master is in a Docker container and the slave is on the host machine.
Run the Jenkins server in a Docker container. They set up a link to the host, and using that link Jenkins can create or recreate Docker images.
In the first approach, I do not understand why they set up an additional Jenkins server residing inside a Docker container. Isn't it enough to just have a Jenkins server on the host machine alongside the Docker containers?
The second approach seems a bit insecure to me, because a process from the container is accessing the host OS. Does it have any benefits?
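For illustration, my understanding of the second approach is that, once the host's /var/run/docker.sock is mounted into the Jenkins container (and a docker CLI is available in it), an ordinary pipeline step talks straight to the host's daemon, roughly like this (the image name is a placeholder):

    pipeline {
        agent any
        stages {
            stage('Build image') {
                steps {
                    // This goes to the *host's* Docker daemon through the
                    // mounted socket, which is exactly my security concern.
                    sh 'docker build -t myapp:latest .'
                }
            }
        }
    }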
Thanks for any useful info.
We are running Jenkins in a Docker container and we want to use the Docker build step plugin. The documentation tells us:
You have to make sure that the Docker service is running on slaves where you run the build. In Jenkins global configuration, you need to specify the Docker REST API URL (typically something like http://127.0.0.1:2375)
But I very often see people using 0.0.0.0:2375.
What is the difference, and which one do we have to use when we just want to use the Docker daemon inside one Docker container on one server (the Docker daemon is running on the same server)?
Regarding the difference between 0.0.0.0:2375 and 127.0.0.1:2375: according to this answer, it's basically whether you want to open the host up to the outside or not. A daemon bound to 127.0.0.1 only accepts connections over the loopback interface, while 0.0.0.0 listens on all interfaces.
If it's all on one server, I'm assuming both should work, as it's all on the same host. One caveat: if Jenkins itself runs in a container with the default bridge networking, then 127.0.0.1 inside that container refers to the container, not the host, so the URL you configure has to be reachable from inside the Jenkins container.
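If in doubt, you can probe both from the Jenkins Script Console; Docker's /_ping endpoint simply returns OK (a sketch; 172.17.0.1 is the usual default docker0 gateway as seen from inside a container, but yours may differ):

    // Try the candidate daemon addresses from wherever Jenkins actually runs.
    ['http://127.0.0.1:2375/_ping', 'http://172.17.0.1:2375/_ping'].each { u ->
        try {
            println "$u -> ${new URL(u).text}"
        } catch (e) {
            println "$u -> ${e.message}"
        }
    }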