Jenkins Dynamic Agents - Clone from VM conflicts with node/agents/labels

Is it possible to provision multiple vms from an agent template/snapshot and access them within a Jenkins job? Or does this limit have to be known ahead of time and each pre-provisioned and connected to Jenkins?
Reading the documentation on Distributed Builds and the vSphere plugin, I had the impression that I could have a template VM from which I dynamically provision as many clones as I need (limited by concurrent build limits), then connect to and build on those. However, when it comes to implementation I have two problems:
1) The agent tries to connect as one of the nodes already defined under /computer (the clones keep the template's static IP, so there are lots of conflicts there)
2) If I name the VM clone something else, the label is not recognized as a valid node (i.e. cloning a VM attached as node 'Agent1' to 'Agent2' and using the label 'Agent2' does not connect to the new VM, since 'Agent2' is not a valid node)

You can have Jenkins create new nodes from a template and name them with an incrementing counter (e.g. a prefix of 'WinAgent-' creates 'WinAgent-1', 'WinAgent-2', etc., and they show up as new nodes under the executors).
1) This is an issue with static IPs. Use the VM configuration to change the IP, or set up DHCP. Use the VM options to have Jenkins pass in the name of the agent; for example, on ESX you can read them inside the guest with vmtoolsd --cmd "info-get guestinfo.SLAVE_JNLP_URL". Then use a script to start the agent with the parameters from the VM options.
2) When set up in the vSphere Cloud section of 'Manage Jenkins > Configure System', the system will create a new node automatically. All you have to do in the job is use the configured label.
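For example, a job that should run on one of the dynamically provisioned clones only needs to request that label. A minimal declarative Pipeline sketch, where the 'WinAgent' label and the build step are hypothetical placeholders:

pipeline {
    // Request a node with the label configured on the vSphere cloud template;
    // Jenkins provisions a clone on demand if none is available.
    agent { label 'WinAgent' }
    stages {
        stage('Build') {
            steps {
                bat 'ver'   // placeholder Windows build step
            }
        }
    }
}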

Related

Jenkins Manage Nodes and Clouds in Google Kubernetes Engine Cluster

I have just started learning Jenkins deployment on Google Kubernetes Engine. I was able to successfully deploy an application to my GKE instance. However, I couldn't figure out how to manage Nodes and Clouds.
Any tutorial or guidance would be highly appreciated.
Underlying idea behind nodes: a single node may not be sufficient or effective for running multiple jobs, so to distribute the load, jobs are transferred to different nodes to attain good performance.
Prerequisites
#1: An instance (let's call it DEV) which is hosting Jenkins (git, Maven, Jenkins)
#2: An instance (let's call it Slave) which will serve as the host machine for our new node
Java needs to be installed on this machine.
A passwordless connection should be established between the two instances.
To achieve this, enable password authentication, generate a key pair on the main (DEV) machine, and copy the public key onto the Slave machine.
Create a directory “workspace” on the Slave machine (/home/ubuntu/workspace).
Now let's get started with the Jenkins part:
Go to Manage Jenkins > Manage Nodes and Clouds.
By default Jenkins contains only the master node.
To create a new node, use the “New Node” option available on the right side of the screen.
Provide a name for the new node and mark it as a permanent agent.
Define the remote root directory: this is a directory of your choosing,
for example a location like
“/home/ubuntu/workspace”
Provide a label of your choice; for example, let's use the label “slave_lab”:
Label = slave_lab
Now define your Launch method
Let’s select “Launch agent via execution of command on the master”
As the command, put:
ssh ubuntu@<private_IP_of_slave> java -jar slave.jar
Note: here <private_IP_of_slave> means the IP of the machine which will be used for our new node.
Now we can proceed to configure jobs to run on our new node.
For that, go to your job > select Configure.
Under the general tab select the following
"Restrict where this project can be run" and provide the label "slave_lab"
Now when you run the job it will be executed on the slave node, not on the master node.
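If you use a Pipeline job instead of a freestyle project, the same restriction can be expressed in the Jenkinsfile. A minimal scripted sketch, assuming the node was labelled 'slave_lab' as above (the build step is a placeholder):

// Scripted Pipeline: run the enclosed steps on a node carrying the 'slave_lab' label
node('slave_lab') {
    stage('Build') {
        sh 'hostname'   // placeholder step; prints the slave machine's hostname
    }
}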

TFS (On prem) release pipeline - Discover and use the name of another machine in a deployment group

I have a web site (site A) deployed on machine A, which depends on a service (service B) deployed onto Machine B.
Machine A and B are in the same deployment group, differentiated by tags (App and Service respectively) and I have 2 deployment phases (one for each tag) pushing the code out to the respective boxes
I need to write a value into the configuration of Site A to tell it the location of Service B.
Is there a way of discovering the name of the machine that Service B was deployed to, to keep my deployment truly dynamic?
Put another way, can I discover the name of a machine with a given deployment tag and use it in a variable?
I've tried running local powershell on the deployment agents to update a variable but that update doesn't seem to make it back to the controlling agent so it can't pass the values across between machines.
My fallback is just to use known server names and write the values into configuration but that feels like a massive hack given how dynamic the rest of the system is.
I'm using TFS 2018 on-prem - the GUI based deployment pipeline (no YAML)
There are predefined agent variables that allow you to reference the machine name in your pipeline.
1. You can refer to the machine name by wrapping the predefined variable in "$()", e.g. "$(Agent.MachineName)" or "$(Agent.Name)".
This method will get the Agent Name from the Agent.Name property in the Capabilities of the agent.
2. There is another workaround. You can add a PowerShell task with the script below to get the name of the local machine which hosts the agent and assign it to a variable.
You need to define a variable (e.g. MachineName) in the Variables tab of your pipeline:
echo "##vso[task.setvariable variable=MachineName]$([System.Net.Dns]::GetHostName())"
The second method will get the machine name from the on-premises computer's properties.

Is there a way for a docker pipeline file to determine the image of the child node it runs on?

I'd like to be able to dynamically provision docker child nodes for builds and have the configuration / setup of those nodes be part of the Jenkinsfile groovy script it uses.
Limitations of the current setup of jobs means Jenkins has one node/executor (master) and I'd like to support using Docker for nodes to alleviate this bottleneck.
I've noticed there's two ways of using a docker container as a node:
You can use the agent section in your pipeline file which allows you to specify an image to use. As part of this, you can target a specific node which supports running docker images, but I haven't gotten that far as to see what happens.
You can use the Jenkins Docker Plugin which allows you to add a Docker Cloud in Jenkins' configuration. It allows you to specify a label which, when used as part of a build, will spawn a container in that "cloud" from the image chosen in the cloud configuration. In this case, the "cloud" is the docker instance running on the Jenkins server.
Unfortunately, it doesn't seem like you can use both together - using the label but specifying a docker image in the configuration (1) where the label matches a docker cloud template configuration (2) does not seem to work and instead produces a label not found error during the build.
Ideally I'd prefer the control to be in the pipeline groovy file so the configuration is stored with the application (1), not with the Jenkins server (2). However, it suggests that if I use the agent section and provide a docker image, it still must target an existing executor first (i.e. master) which will cause other builds to queue until the current build is complete.
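For reference, approach (1) in a declarative Jenkinsfile looks roughly like the sketch below; the image name and the 'docker-host' label (an executor able to run Docker) are hypothetical examples:

pipeline {
    agent {
        docker {
            image 'maven:3-jdk-8'   // hypothetical build image run via the Docker Pipeline plugin
            label 'docker-host'     // hypothetical label of a node that can run Docker containers
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -version'   // runs inside the container
            }
        }
    }
}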
I'm at a point of migrating builds, so not all builds can support using a docker container as the node yet, and builds will have issues when run in parallel on the master node.
Is there a way for a docker pipeline file to determine the image of the child node it runs on?
There are a few options I have considered but not attempted yet:
Migrate jobs to run on the "docker cloud" until all jobs support running on child container nodes, then move the configuration from Jenkins to the pipeline build file for each job and turn on parallel builds on the master node.
Attempt to add a new node configuration which is effectively a copy of master (uses the same server, just different location). Configure it to support parallel builds, and have all migrated jobs target the node explicitly during builds.

Jenkins trigger on-demand slaves in dockers

I'm looking for a way to run Jenkins jobs/builds inside Jenkins slaves started dynamically (on-demand) in Docker.
What I'm actually looking for, and what my flow looks like:
1) Triggering Jenkins job (manually/git/gerrit)
2) The Jenkins master (running in Docker) starts a slave Docker container (and passes the script/instructions of the build)
3) The build runs on the Jenkins slave (or slaves, if parallel/pipeline)
4) The result is returned to the Jenkins master
5) The Jenkins slave Docker container stops
Is it possible to do it this way?
Steps for creating the Docker slave image (installing openssh, creating a user, etc.) are mentioned in the link below. Install the Docker plugin from the same link.
Click here!
Go to the Jenkins global configuration; under the Cloud heading there is a Docker configuration. Enter the Docker host URL with the port number (credentials are not required) and give some values for the connection timeout and read timeout.
Under the Docker template, enter the name of the Docker image which we created in point number 1.
Set a label (give this label name during Jenkins job creation to restrict the job to this slave),
Select the usage option -> only build jobs with label restrictions.
Number of executors -> minimum 1. Select the launch method as SSH and enter the user credentials to log in, which we created in the Docker image in step number 1.
Create a job restricted to the Docker slave label and run it; on demand it will spin up a container.
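A minimal declarative Pipeline sketch of such a job, assuming the Docker template was given the label 'docker-slave' in the cloud configuration (the label and the step are hypothetical):

pipeline {
    // The Docker plugin provisions a container from the template whose label matches this expression
    agent { label 'docker-slave' }
    stages {
        stage('Test') {
            steps {
                sh 'uname -a'   // placeholder step executed inside the provisioned container
            }
        }
    }
}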
Use this plugin: https://wiki.jenkins-ci.org/display/JENKINS/Yet+Another+Docker+Plugin
After installation (it requires Java 1.8), navigate to the configuration. There are two steps:
configure docker "cloud"
add "instances" (docker images) you want to run the build on
Every image should have a label assigned; use this label in your job configuration to tell Jenkins explicitly on which node the job should be run.
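For instance, a scripted Pipeline only needs to request the label assigned to the image; in this sketch 'yadp-agent' is a hypothetical label:

// Request a node with the label assigned to the image template;
// the plugin spins up a matching container on demand.
node('yadp-agent') {
    stage('Build') {
        sh 'cat /etc/os-release'   // placeholder step run inside the container
    }
}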

Jenkins Swarm plugin - Environment variables

I have the latest Jenkins and am using its latest Swarm Plugin.
I have written Ansible modules/roles/playbooks to set up and install various tools/configuration on a given target node (which I would like to use as a Swarm slave node).
After the Ansible playbook run is complete, I see a new slave is created and attached to my Jenkins master, but the Swarm Plugin's docs (Available Options) don't mention how to create ENVIRONMENT variables on the slave. https://wiki.jenkins-ci.org/display/JENKINS/Swarm+Plugin
My question is:
How can I have multiple slaves created on the same target machine, each with its own individual settings for tools like JAVA_HOME, M2_HOME, GRADLE_HOME, PATH, etc.?
How can I set ENVIRONMENT variables for a slave using the Swarm plugin?
This is required because if I create a slave whose default JAVA_HOME is jdk1.7.0_67, I would then like to create another slave whose default JAVA_HOME is jdk1.8.0_45. Similarly, the end goal is to have various flavors of such slaves with various tools, where each slave's tools are slightly different. I'll assign the LABEL(s) accordingly and use them in a Jenkins job's configuration so that a job runs only on these slaves when the associated label is tied to the job.
I tried using https://github.com/MovingBlocks/GroovyJenkins/blob/master/src/main/groovy/AddNodeToJenkins.groovy but I'm not sure how I can automatically define/set ENVIRONMENT variables in the slave's configuration.
I'm assuming you're running on Linux here.
You can have a shell script export the new environment variables before calling the swarm-client; these variables will be inherited by the new swarm slave:
https://unix.stackexchange.com/questions/130985/if-processes-inherit-the-parents-environment-why-do-we-need-export
Alternatively, you could run Docker and have separate swarm slave containers (https://hub.docker.com/r/csanchez/jenkins-swarm-slave/), put your specific install into the Dockerfile, and add a new ENTRYPOINT at the bottom of the Dockerfile:
# override the image's entrypoint to pass labels to the swarm client
ENTRYPOINT ["/usr/local/bin/jenkins-slave.sh", \
  "-labels", "label1", "-labels", "label2"]
