Jenkins 2.x Slave-Agent Variables in Pipelines

I have tried every permutation I can find to pull a pre-existing variable from a specific Jenkins slave, and I cannot find a solution.
We have a git branch variable defined on each slave agent as the default branch for all builds initiated on that slave. This is to ensure that all DSL-scripted job configuration is tested on our dev machine before it is promoted to a higher Jenkins environment.
I have created a pipeline that builds all the components needed to stand up a new Jenkins (with all of our enterprise deployment pipelines created), and it needs access to that one specific variable to correctly build the jobs for the Jenkins master/slave combination it is running on.
I need a way (in a Jenkinsfile) to access the variables that are configured on a particular Jenkins slave machine.
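For reference, environment variables defined in a node's configuration (Node Properties -> Environment variables) are exposed to builds that run on that node, so a scripted-pipeline sketch along these lines should be able to read one (the DEFAULT_BRANCH name and the node label here are assumptions, not names from the question):

node('dev-slave') {
    // DEFAULT_BRANCH is assumed to be defined under the node's
    // "Environment variables" node property in its configuration
    def branch = env.DEFAULT_BRANCH ?: 'master'
    echo "Using default branch for this slave: ${branch}"
}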

Related

Ephemeral Jenkins Pipeline Jobs from Github and Jenkinsfile

I have successfully automated the deployment and redeployment of a Jenkins master and its slaves.
I know how to manually create pipeline jobs and add GitHub repos so that their Jenkinsfiles provide the build steps.
My issue is how to automate adding the pipeline jobs to Jenkins after it has been destroyed and redeployed, without having to manually create each pipeline job and point it at its Jenkinsfile every time.
I have seen this done before in a container environment with Chef and Docker: when the setup was redeployed or updated, it re-added all the pipelines automatically.
I do not want to use the UI at all, except to confirm job status and progress and to verify settings.
I would recommend looking at the Job DSL plugin to create jobs, using a seed job to create them on initial Jenkins startup. The Jenkins Configuration-as-Code plugin can be used to set up any other configuration outside the jobs.
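As a minimal sketch of that approach, the seed job would run a Job DSL script along these lines (the job name and repo URL are placeholders):

pipelineJob('my-app-pipeline') {
    definition {
        cpsScm {
            scm {
                git {
                    remote {
                        url('https://github.com/example/my-app.git')
                    }
                    branch('*/master')
                }
            }
            // Use the Jenkinsfile kept in the repository itself
            scriptPath('Jenkinsfile')
        }
    }
}

Re-running the seed job after every redeploy then recreates all the pipeline jobs without touching the UI.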

Jenkins: choose among different deployment methods (Master slave vs ansible)

I have three VMs, which I have used to deploy the develop, staging and master branches of a project.
Let's say Jenkins is running on a VM named JEN,
the develop branch is on a VM named DEV,
the staging branch is on a VM named STAGE,
and the master branch is on a VM named MASTER.
I have made three slave nodes (DEV, STAGE, MASTER) in Jenkins, and the three branches' Jenkinsfiles run on the corresponding VMs (DEV, STAGE, MASTER).
Another approach I am considering is:
Do not make DEV, STAGE and MASTER slave nodes; that is, have only one Jenkins agent (JEN).
Run the pipeline and its tests on JEN and use Ansible to deploy remotely to DEV, STAGE and MASTER.
How would that compare with the first approach?
First, I believe it is Ansible, not ancible.
Second, the interest of an Ansible deployment model is that it is agentless (as opposed to Jenkins, which needs an agent listener, agent.jar).
So if what you need to deploy is not the sources but deliverables, Ansible is more suited for that task, provided the target machines are accessible through SSH.
The Jenkins pipeline would simply do a tower_cli call to the right Ansible Job Template: that is what I have in my deployment platform.
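A minimal sketch of that last step, assuming an Ansible Tower/AWX instance with tower-cli configured on the agent (the job-template ID is a placeholder):

node('JEN') {
    stage('Deploy') {
        // Launch the matching Ansible Job Template and wait for it to finish
        sh 'tower-cli job launch --job-template=42 --monitor'
    }
}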

How to set in Jenkins a credential id for Perforce SCM before starting the job in a slave node?

Let me explain what I have before asking my question. I have a Jenkins instance with a seed project that creates jobs from Groovy scripts using the Job DSL plugin.
I have a job that uses Perforce as its SCM. This has been set up from the Groovy script, and the Perforce credentials have also been set, using the id passed to credential() inside scm { perforce { credential("perforce1") } }.
This job is configured to run only on my slave nodes.
What I want is for the slave node to set the Perforce credentials before the SCM step, based on something like an environment variable (e.g. NODE_SCM), so that when the build process launches, the node sets the credentials to use before the Perforce SCM starts its work.
The Perforce credentials currently live in Jenkins, but they could be created at runtime or something like that, if what I want is possible at all.
Example: Imagine I have two credentials stored in my Jenkins (perforce1 and perforce2). The variable NODE_SCM would hold one of those ids, and the slave node would use it when building the job.
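In scripted-pipeline terms, the goal would look roughly like this sketch, assuming NODE_SCM is defined as a per-node environment variable and the P4 plugin's p4sync step is used (the node label and depot path are placeholders, and the exact p4sync parameters are an assumption to illustrate the idea):

node('perforce-slave') {
    // NODE_SCM is assumed to hold the id of the Perforce credential
    // that this particular node should use (perforce1 or perforce2)
    def credId = env.NODE_SCM ?: 'perforce1'
    p4sync credential: credId, depotPath: '//depot/my-project'
}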
I don't know if I have explained correctly what I want to achieve.
Thanks for your attention in advance.
Best regards

How to checkout and run pipeline file from TFS on specific node in Jenkins?

I am trying to run a pipeline job that gets its pipeline file from TFS, but the mapping of the workspace and the checkout are done on the master instead of the slave.
I have a Jenkins master installed on a Linux machine, and I connected a Windows machine to it as a slave. I created a pipeline job with the 'Pipeline script from SCM' option selected, pointing to TFS.
How can I make the Windows slave run that pipeline job?
The master can't run the job because it is running on Linux; it fails when it tries to map a workspace to TFS in order to download the pipeline script and run it.
Even if I create another pipeline job and hard-code a script that runs my original pipeline job, like this:
node('WIN_SLAVE') {
    build job: 'My_Pipeline'
}
It doesn't work.
I can see in the output that the initial script (above) is in fact running on my Windows slave, but when it builds the job 'My_Pipeline', it still tries to map a workspace on the Jenkins master at its Linux path /var/jenkins/... and fails.
If the initial pipeline script ran on the Windows slave, why does the other pipeline script not run on the same node? Why does it try again to check out the pipeline file from TFS onto the Jenkins master?
How can I make the Windows slave check out the pipeline file and run it?
Here are some things to check...
Make sure you disabled the original job, or that you are completely redefining it to run on the slave, because you indicated you set up “another job” for the slave. It appears that this other job is just triggering the previous job, rather than defining its own specifications. When the job is run on the slave, it's just running whatever settings are in that original job.
Also, if you have the box checked to build when a change is pushed to TFS, then your original job could still be trying to run every time a change is made in TFS.
Verify that the slave's Remote root directory is set properly in the slave configuration under Manage Jenkins -> Manage Nodes.
Since this slave job is triggering the other job you originally created on the master, then it will build on the master as expected.
Instead of referencing the My_Pipeline job, change the My_Pipeline job itself to run on the slave. If you are using a declarative Pipeline for the original job, then change that original job to run on the slave within the original job settings. You can do it similarly to how you have indicated above, just define the node in the original job.
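For a declarative Pipeline, that change is roughly the following sketch (the WIN_SLAVE label comes from the question; the stage content is a placeholder):

pipeline {
    agent { label 'WIN_SLAVE' }
    stages {
        stage('Build') {
            steps {
                // Placeholder step; the real job's steps go here
                bat 'echo running on the Windows slave'
            }
        }
    }
}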
If the original job is a freestyle project, there is a checkbox titled Restrict where this project can be run. Check that and include the name of the slave in the Label Expression. When you run the job, it will then be restricted to the slave.
Lastly, posting the My_Pipeline job will be helpful.

Jenkins and gitlab sharing build slaves

Let's say you have a GitLab instance that already uses Jenkins for all its CI builds via the GitLab Jenkins plugin, etc. The Jenkins setup has a modest collection of build slaves providing a variety of platforms, and each slave is set up to run just one job at a time (i.e. a Jenkins job gets exclusive access to the build slave, which is important for reasons I won't go into here).
Now let's say you want to consider using GitLab's own native CI support, moving one or more projects over to GitLab instead of Jenkins. GitLab CI would need to use the same set of build slaves, but it needs to play nice with Jenkins: the two need to cooperate so that if one runs a job on a particular slave, the other won't submit a job to that same slave until the first finishes. In effect, while Jenkins is running a job on a slave, GitLab should see that slave as unavailable, and vice versa.
Does anyone have a working method for getting GitLab to tell Jenkins it is using a slave while it runs a CI job there, and vice versa? The method doesn't have to be 100% bulletproof; it would be acceptable for GitLab and Jenkins to run a job on the same slave at the same time if that is a rare event (i.e. race conditions can be tolerated if they are unlikely to occur often).
Additional info:
Build slaves include Linux, Windows and Apple.
Docker is not used and would not be permitted at this time.
We have full admin access to everything, but changing code in GitLab or Jenkins themselves would be rejected. Adding scripts or plugins would be okay.
