Slave container provisioning on Kubernetes - Jenkins

I am using the Kubernetes plugin in Jenkins. I want to run an init script before the slave container is provisioned on Kubernetes, but the Kubernetes plugin's pod template won't allow me to, or I cannot find a way to do it. Can anybody please help me with this? I need to run a certain set of commands on the Kubernetes slave container before it provisions. This is how my config looks:

1 - You don't need to specify the slave image; Jenkins will add an agent container to the pod regardless. You should only specify the containers you want it to run in addition.
2 - On those containers you can specify the entrypoints, or pre-provision whatever you want beforehand, and just worry about the execution. That means you can get a container ready to go and assume the code will be there; if you need to run any extra commands on the code, you can just add an extra script step.
3 - In order for your step to be executed in a container, you have to be explicit about it in your pipeline; otherwise it will run on the master.
I can't really guide you through the UI, because I use a Jenkinsfile inside the projects I want to build; a minimal sketch of that approach follows.
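
For illustration, a minimal scripted-pipeline sketch using the Kubernetes plugin's podTemplate/container steps; the label, image, and script name are placeholders, not anything taken from your config:

podTemplate(label: 'k8s-agent', containers: [
    // the jnlp agent container is added automatically; declare only the extras
    containerTemplate(name: 'build', image: 'maven:3-jdk-8', ttyEnabled: true, command: 'cat')
]) {
    node('k8s-agent') {
        // steps run in the jnlp container unless wrapped in container()
        container('build') {
            sh './init-script.sh' // your init commands run here, before the build
            sh 'mvn -B verify'    // the actual build step
        }
    }
}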

Pass binary from Jenkins host to agent

Can you pass a binary from a Jenkins host to an agent?
I've got Jenkins running in Kubernetes, with the Terraform plugin installed on my Jenkins master and the binary located at /var/jenkins_home/tools/org.jenkinsci.plugins.terraform.TerraformInstallation/terraform/terraform
I would like to pass this to my Jenkins agent by configuring my pod template to mount the host volume path /var/jenkins_home/tools/org.jenkinsci.plugins.terraform.TerraformInstallation/terraform/terraform at the agent's path /usr/bin/terraform
But this doesn't seem to work as expected.
When I exec into the agent and run terraform version, I get the error bash: terraform: command not found, indicating that it doesn't have the binary.
I can see a terraform directory mounted in /usr/bin, but without the binary. What I expect is for terraform to be available on the agent, but my thinking might be incorrect here.
Is it possible to do this? Does anyone have any experience with this?
As @David Maze mentioned, a binary from the Jenkins master needs to be manually installed on every node, which can be difficult to manage (the hostPath volume refers to the filesystem of the node the agent pod runs on, not the Jenkins master's filesystem, which is why you only see an empty directory). However, you can set Jenkins to run pipeline steps inside a container whose image contains the tools you need, which simplifies this case.
Read more: execution-env-jenkins.
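
As a sketch of that approach, assuming the Docker Pipeline plugin is installed; the image tag is illustrative, pin whatever version you need:

pipeline {
    agent {
        docker {
            image 'hashicorp/terraform:light'
            // the official image uses terraform as its entrypoint; clear it so
            // Jenkins can keep the container alive for the duration of the steps
            args '--entrypoint='
        }
    }
    stages {
        stage('Check') {
            steps {
                sh 'terraform version' // the binary comes from the image, not the host
            }
        }
    }
}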
One alternative is to use the Slave Setup plugin. We use it to install and configure internal tools on nodes based on labels. A lot less hassle than @Malgorata's (and our previous) manual copy approach.
Not sure how well it works with Kubernetes, as it's not part of our configuration.

GitLab CI using Docker - how to rename the build container?

We have set up our CI environment with GitLab CI, using Docker images as described here.
Using the default settings the build containers get rather generic names like runner-oh2M8zk--project-206-concurrent-0-build-4.
The machine that hosts the GitLab runner also runs several other containers. To be able to monitor activity on that machine, we want to be able to identify which project (and maybe commit or branch) triggered a certain build container. With the current naming, this is next to impossible.
AFAIK, the documentation does not give any hint on how to specify the name of a container run by the GitLab runner. Is there any option (presumably in .gitlab-ci.yml) to set that name, or at least to specify a prefix/suffix for it?

How can I use the docker image jenkins/jnlp-slave with the Jenkins docker-swarm-plugin (pass agent name)?

I have a swarm of three nodes (one manager, two workers). In my swarm, I am running a Jenkins service with the docker-swarm-plugin (https://github.com/jenkinsci/docker-swarm-plugin) installed. I want to use the plugin to create a build agent container in my swarm for every Jenkins job. For the agents I want to use the jenkins/jnlp-slave docker image (https://hub.docker.com/r/jenkinsci/jnlp-slave/). The image expects two arguments at startup:
secret (can be set via JENKINS_SECRET environment variable)
agent name (can be set via JENKINS_AGENT_NAME environment variable)
The docker-swarm-plugin creates three environment variables:
$DOCKER_SWARM_PLUGIN_JENKINS_AGENT_SECRET (I use this to set the secret)
$DOCKER_SWARM_PLUGIN_JENKINS_AGENT_JAR_URL
$DOCKER_SWARM_PLUGIN_JENKINS_AGENT_JNLP_URL (this contains the agent name)
I pass the secret to the agent via the JENKINS_SECRET environment variable (in the ENV section of the Jenkins plugin configuration):
JENKINS_SECRET=$DOCKER_SWARM_PLUGIN_JENKINS_AGENT_SECRET
I tried to pass the agent name by using a regular expression (also in the ENV section of the Jenkins plugin configuration):
JENKINS_AGENT_NAME=`echo $DOCKER_SWARM_PLUGIN_JENKINS_AGENT_JNLP_URL | sed ...`
But the command is not executed (I understand that this is for security reasons to avoid code injection).
What I want to achieve:
I want to run Jenkins on my Docker swarm, and I want Jenkins to run every job in its own build agent container that is dropped after the job finishes. I also want the build agent containers to spread across the swarm (the Jenkins docker-plugin launches them all on the node where the Jenkins master is running). As I understand it, the docker-swarm-plugin should do exactly what I want, and the jenkins/jnlp-slave image is meant for build agent containers like the ones I want to use. But I can't find a solution for getting them to work together.
Can anyone give me some advice?
Should I maybe use another image that is working better with the plugin?
I opened issue https://github.com/jenkinsci/docker-swarm-plugin/issues/37 on docker-swarm-plugin for this, and now, with PR https://github.com/jenkinsci/docker-swarm-plugin/pull/39, a new environment variable is added with the created agent's name. This can be passed to the docker image and everything works fine!
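With that change, the ENV section reduces to something like the following; note that DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME is my assumption of the variable name the PR introduced, so verify it against the plugin documentation:
# second variable name assumed from the PR; check the plugin docs
JENKINS_SECRET=$DOCKER_SWARM_PLUGIN_JENKINS_AGENT_SECRET
JENKINS_AGENT_NAME=$DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME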

Is there a way for a docker pipeline file to determine the image of the child node it runs on?

I'd like to be able to dynamically provision docker child nodes for builds and have the configuration / setup of those nodes be part of the Jenkinsfile groovy script it uses.
Limitations of the current job setup mean Jenkins has one node/executor (the master), and I'd like to support using Docker for nodes to alleviate this bottleneck.
I've noticed there are two ways of using a docker container as a node:
You can use the agent section in your pipeline file, which allows you to specify an image to use (a sketch follows this list). As part of this, you can target a specific node which supports running docker images, but I haven't gotten far enough to see what happens.
You can use the Jenkins Docker Plugin which allows you to add a Docker Cloud in Jenkins' configuration. It allows you to specify a label which, when used as part of a build, will spawn a container in that "cloud" from the image chosen in the cloud configuration. In this case, the "cloud" is the docker instance running on the Jenkins server.
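
A minimal sketch of option (1) in a declarative Jenkinsfile, assuming the Docker Pipeline plugin is installed; the image and label are placeholders:

pipeline {
    agent {
        docker {
            image 'node:18'      // placeholder build image
            label 'docker-host'  // placeholder label of a node that can run Docker
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'node --version' // runs inside the container, not on the node itself
            }
        }
    }
}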
Unfortunately, it doesn't seem like you can use both together: using the label while specifying a docker image in the pipeline (1), where the label matches a docker cloud template configuration (2), does not work and instead produces a "label not found" error during the build.
Ideally, I'd prefer the control to be in the pipeline groovy file so the configuration is stored with the application (1), not with the Jenkins server (2). However, it seems that if I use the agent section and provide a docker image, it still must target an existing executor first (i.e. the master), which will cause other builds to queue until the current build is complete.
I'm at the point of migrating builds, so not all builds can support using a docker container as the node yet, and builds will have issues when run in parallel on the master node.
Is there a way for a docker pipeline file to determine the image of the child node it runs on?
There are a few options I have considered but not attempted yet:
Migrate jobs to run on the "docker cloud" until all jobs support running on child container nodes, then move the configuration from Jenkins to the pipeline build file for each job and turn on parallel builds on the master node.
Attempt to add a new node configuration which is effectively a copy of master (uses the same server, just different location). Configure it to support parallel builds, and have all migrated jobs target the node explicitly during builds.

How to trigger a Jenkins job at boot

When running Jenkins as a docker container, some advanced setup may be lost on upgrade (or restart). My typical example is downloading the wildfly-cli jar into /var/lib/jenkins/war/WEB-INF/lib/ for the wildfly-deployer.
I find it easy to implement such setup thanks to a Jenkins job.
And I now face the following question: is there a way to trigger that Jenkins job only once, after a system/Jenkins boot?
I have an idea, which might be somewhat hacky: build a custom docker container based on the original Jenkins container and add an extra step to your Dockerfile.
That extra step would trigger that job. Jenkins does have an option to start a job externally, e.g. from a script, or in your case from a Dockerfile.
You can rebuild and restart that container and it will run the build once. Would that work for you?
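As a sketch, the external trigger could POST to Jenkins' remote build endpoint once the server is reachable; the job name, user, and token below are placeholders:

# trigger the job once via the REST API, authenticating with an API token
curl -X POST --user admin:API_TOKEN "http://localhost:8080/job/bootstrap-setup/build"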
