I have a script in the init.groovy.d/ directory which runs when Jenkins starts. I want to look for some job runs and stop them.
All seems to be working fine except when I enable matrix security (which we need to use on our production system).
The relevant groovy code is:
def busyExecutors = Jenkins.instance.computers.collect { c ->
    c.executors.findAll { it.isBusy() }
}.flatten()

def jobsFound = []
busyExecutors.each { e ->
    def job = e.getCurrentExecutable()
    if (e.getElapsedTime() > max_run_time_usec) {
        logger.info("${job.getUrl()} timed out - killing it")
        job.setDescription("Timed out") // <----- trouble!
        e.doStop()
    }
}
But I'm getting this error:
hudson.security.AccessDeniedException2: anonymous is missing the Run/Update permission
I really don't want to grant anonymous this permission just to make this work.
Any ideas on how to get the scripts in init.groovy.d to run with, say, administrator permissions, or as another user to whom I can then grant the permissions I need?
You can elevate your privileges in the script by adding the following:
hudson.security.ACL.impersonate(hudson.security.ACL.SYSTEM)
That would allow you to set the build description.
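For context, a minimal sketch of how this can sit at the top of an init.groovy.d script (note that on newer Jenkins cores ACL.impersonate is deprecated in favor of ACL.as, which returns a closeable context):

import hudson.security.ACL

// run the rest of the init script with full SYSTEM privileges
ACL.impersonate(ACL.SYSTEM)

// newer-core equivalent:
// def ctx = ACL.as(ACL.SYSTEM)
// try { /* privileged work */ } finally { ctx.close() }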
First off, the setup in question:
A Jenkins instance with several build nodes and an on-prem Azure DevOps Server containing the Git repositories.
The repo in question is too large to build on every push for all branches and all devs, so a small workaround was put in place:
The production branches have polling enabled twice a day (because of the testing duration, which is handled downstream, more builds would not help with quality).
All other branches have their automated building suppressed. Devs can still start builds manually for builds/deployments/unit tests if they so choose.
The Jenkinsfile is parameterized for which platforms to build: on prod* branches all the platforms are true, on all other branches false.
This helps because otherwise the initial build of a feature branch would always build/deploy all platforms locally, which would put too much load on the server infrastructure.
I added a service endpoint for Jenkins in Azure DevOps and added a build-validation .yml. This basically works: when I call the source branch of the pull request with the merge commit ID, I pass along a parameter
pullRequestID which contains the ID of the PR.
Snippet of the .yml:
- task: JenkinsQueueJob@2
  inputs:
    serverEndpoint: 'MyServerEndpoint'
    jobName: 'MyJob'
    isMultibranchJob: true
    captureConsole: true
    capturePipeline: true
    isParameterizedJob: true
    multibranchPipelineBranch: $(System.PullRequest.SourceBranch)
    jobParameters: |
      stepsToPerform=Build
      runUnittest=true
      pullRequestID=$(System.PullRequest.PullRequestId)
Snippet of the Jenkinsfile:
def isPullRequest = false
if (params.pullRequestID?.trim()) {
    isPullRequest = true
    // do stuff to change how the pipeline should react.
}
In the Jenkinsfile I check whether the parameter is non-empty; if so, I reset the platforms to build to basically all of them and enable the unit tests, as sketched below.
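For illustration, a sketch of what that reset could look like inside the if block above (platformsToBuild and runUnittests are hypothetical names, not from the original pipeline):

if (isPullRequest) {
    // PR validation: override the branch defaults and build everything
    platformsToBuild = ['win64', 'linux64'] // hypothetical platform list
    runUnittests = true                     // hypothetical flag
}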
The problem is: if the branch has never been built, Jenkins does not yet know the parameter on the first run, so it is ignored, building nothing and returning 0 because "nothing had to be done".
Is there any way to only run the Jenkins build if it hasn't run already?
Or is it possible to learn from the remote call whether this was the build with ID 1?
The only other option would be to call Jenkins via the web API and check for the last successful build, but in that case I would have to have the token stored somewhere in source control.
Am I missing something obvious here? I don't want to trigger the feature branch builds to do nothing more than once, because devs could lose useful information about their started builds/deployments.
Any ideas appreciated.
To whom it may concern with similar problems:
In the end I used the following workaround:
The Jenkins endpoint is called by a user that is only used for automated builds. So, in case this user triggered the build, I set everything up to run a pull request validation, even if it is the first build. Along the lines of:
def causes = currentBuild.getBuildCauses('hudson.model.Cause$UserIdCause')
if (causes != null) {
    def buildCauses = readJSON text: causes.toString()
    buildCauses.each { buildCause ->
        if (buildCause['userId'] == "theNameOfMyBuildUser") {
            triggeredByAzureDevops = true
        }
    }
}
getBuildCauses must be allowed to run by a Jenkins admin (via in-process script approval) for that to work.
So, most of the questions and answers I've found on this subject are for people who want to use the SAME workspace for different runs. (Which baffles me, but then I require a clean slate each time I start a job. Leftover stuff will only break things.)
My issue is the EXACT opposite - I MUST have a separate workspace for each run (or I need to know how to create files with the same name in different runs that stay with that run only, and which are easily reachable from bash scripts started by the pipeline!)
So, my question is - how do I either force Jenkins to NOT use the same workspace for two concurrently-running jobs on different hosts, OR what variable can I use in the 'custom workspace' field to accomplish this?
After I responded to the question by @Joerg S I realized that the thing Joerg S says CAN'T happen is EXACTLY what I'm observing! Jenkins is using the SAME workspace for 2 different, concurrent jobs on 2 different hosts. Is this a Jenkins pipeline bug?
See below for a bewildering amount of information.
Given the way I have to go onto and off of nodes during the run, I've found that I can start 2 different builds on different hosts of the same job, and they SHARE the workspace dir! Since each job has shell scripts which are busy writing files into that directory, this is extremely bad.
In Custom workspace in jenkins we are told to use custom workspace, and I'm set up just like that
In Jenkins: how to run builds in unique directories we are told to use ${BUILD_NUMBER} in the above custom workspace field, so what I tried was:
${JENKINS_HOME}/workspace/${ITEM_FULLNAME}/${BUILD_NUMBER}
All that happens to me when I use that is that the workspace name is, you guessed it, "${BUILD_NUMBER}" (and I even got a "${BUILD_NUMBER}@2" just for good measure!)
I tried {$BUILD_ID}, same thing (uses that literally, does not substitute the number).
I have the 'allow concurrent builds' turned on.
I'm using pipelines exclusively.
All jobs here, as part of normal execution, cause the slave, non-master host to reboot into an OS that does not have the capability to run slave.jar (indeed, it has no network access at all), so I cannot run the entire pipeline on that host.
All jobs use the following construct somewhere inside them:
tests = Arrays.asList(tests.split("\\r?\\n"))
shellerror = 231
for (line in tests) {
So let's call an example job 'foo' that loops through a list, as above, that I want to run on 2 different hosts. The pipeline for that job starts running on master (since the above for (line in tests) is REQUIRED to run on a node!). Then it goes back and forth between master and slave, often multiple times.
If I start this job on host A and host B at about the same time, they will BOTH use the workspace ${JENKINS_HOME}/workspace/${JOB_NAME}, or in my case /var/lib/jenkins/jenkins/workspace/job
Since they write different data to files with the same name in that directory, I'm clearly totally broken immediately.
So, how do I force Jenkins to use a unique workspace for EVERY SINGLE RUN?
Or, what???
Other things: pipeline build step version 2.5.1, Jenkins 2.46.2
I've been trying to get the workspace statement ('ws') to work, but that doesn't quite work as I expected either - some files end up in the workspace I explicitly name, and some are still in the 'built-in' workspace (workspace/).
I was asked to provide code. The 'standard' pipeline I use is about 26K bytes, comprising about 590 lines. So, I'm going to GREATLY reduce it. That being said:
node("master") { // 1
..... lots of stuff....
} // this matches the "node('master')" above
node(HOST) {
echo "on $HOST, check what os"
if (isUnix())
...some more stuff...
} // end of 'node(HOST)' above
if (isok == 0 ) {
node("master") {
echo "----------------- Running on MASTER 19 $shellerror waiting on boot out of windows ------------"
sleep 120
echo "----------------- Leaving MASTER ------------"
}
}
... lots 'o code ...
node(HOST) {
... etc
} // matches the latest 'node HOST' above
node("master") { // 120
.... code ...
for( line in tests) {
...code...
}
}
... and on and on and on, switching back and forth from one to the other
FWIW, when I tried to make the above use 'ws' so that I could make certain the ws name was unique, I simply added a 'ws wsname' block directly under (almost) every 'node' opening so it was
node(name) { ws (wsname) { ..stuff that was in node block before... } }
But then I've got two directories to worry about checking - both the 'default' workspace/jobname dir AND the new wsname one.
Try using the customWorkspace node common option:
pipeline {
    agent {
        node {
            label 'node(s)-defined-label'
            customWorkspace "${JENKINS_HOME}/workspace/${JOB_NAME}/${BUILD_NUMBER}"
        }
    }
    stages {
        // Your pipeline logic here
    }
}
customWorkspace
A string. Run the Pipeline or individual stage this agent is applied to within this custom workspace, rather than the default. It can be either a relative path, in which case the custom workspace will be under the workspace root on the node, or an absolute path.
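To make the relative/absolute distinction concrete, a small sketch (the label and paths are placeholders, not from the original question):

agent {
    node {
        label 'linux'
        // relative path: resolved under the node's workspace root,
        // e.g. <workspaceRoot>/pr-builds/42
        customWorkspace "pr-builds/${BUILD_NUMBER}"
        // an absolute path such as "/tmp/pr-builds/${BUILD_NUMBER}"
        // would be used verbatim instead
    }
}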
Edit
Since this doesn't work for your complex pipeline, maybe try this silly solution:
def WORKSPACE = "${JENKINS_HOME}/workspace/${JOB_NAME}/${BUILD_NUMBER}"
node(HOST) {
    sh(script: "mkdir -p ${WORKSPACE}")
    // note: each sh step runs its own shell, so the cd must share a
    // step with the commands that need the new working directory
    sh(script: "cd ${WORKSPACE} && echo 'do stuff here'")
}
or if dir() is accessible:
def WORKSPACE = "${JENKINS_HOME}/workspace/${JOB_NAME}/${BUILD_NUMBER}"
node(HOST) {
    sh(script: "mkdir -p ${WORKSPACE}")
    dir(WORKSPACE) {
        // Do stuff here
    }
}
customWorkspace didn't work for me.
What worked:
stages {
    stage("SCM (For commit trigger)") {
        steps {
            ws('custom-workspace') { // Because we don't want to switch from the pipeline checkout
                // Generated from http://lstool01:8080/job/Permanent%20Build/pipeline-syntax/
                checkout(xxx)
            }
        }
    }
}
'${SOMEVAR}' will not get substituted, but "${SOMEVAR}" will - this is how Groovy strings are handled (see Groovy string handling).
So if you have a
ws("/some/path/somewhere/${BUILD_ID}") {
    // something
}
on your node in your pipeline Jenkinsfile, it should do the trick in this regard.
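A quick way to see the difference (a trivial sketch):

echo '${BUILD_ID}' // single quotes: prints the literal text ${BUILD_ID}
echo "${BUILD_ID}" // double quotes: prints the actual build id, e.g. 42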
The problem with @2 workspaces can occur when you allow concurrent builds of the project - I had the exact same problem with a custom ws() and @2 - simply disallow concurrent builds (sketched below) or work around it.
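In a declarative pipeline, disallowing concurrent builds is a one-liner in the options block, for example:

pipeline {
    agent any
    options {
        // queue new builds instead of running them concurrently,
        // which avoids the @2 workspace suffix entirely
        disableConcurrentBuilds()
    }
    stages {
        stage('Build') {
            steps {
                echo 'building'
            }
        }
    }
}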
We were pretty hooked on using some plugins which are not supported in pipelines anymore, and we would like to implement their functionality in the shared libraries of our pipelines. One of the main things required for that would be to get hold of the Jenkins instance; can someone share a way to do that?
Are there any restrictions, or a proper way, to get hold of Jenkins.getActiveInstance() under the "src" or "vars" folder?
I have tried Jenkins.getActiveInstance() in src code as well as vars code but it returns null. Am I missing something here? Any help will be appreciated.
Thanks
Try 'Hudson.instance'. This pipeline below works for me on Jenkins 2.32.x. You may have to do some script approvals or turn off the sandbox.
pipeline {
    agent none
    stages {
        stage('Instance Info') {
            steps {
                script {
                    def jenkinsInstance = Hudson.instance
                    for (slave in jenkinsInstance.slaves) {
                        echo "Slave: ${slave.computer.name}"
                    }
                }
            }
        }
    }
}
Bill
This ticket can be closed; there were a few issues:
1. Fix the access (via Manage Jenkins -> In-process Script Approval)
2. Some scripts contained non-CPS code (see the sketch below)
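For the second point, non-CPS code that walks live Jenkins objects is usually isolated in a @NonCPS-annotated helper inside the shared library; a minimal sketch (nodeNames is a hypothetical helper, and it may still need script approval in a sandbox):

import jenkins.model.Jenkins

@NonCPS
def nodeNames() {
    // iterating over live Jenkins objects is not CPS-serializable,
    // so keep it inside a @NonCPS method and return plain data
    Jenkins.instance.nodes.collect { it.nodeName }
}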
I'm developing my first plugin for Jenkins that will add some additional permissions to Jenkins' matrix based security authorization.
I'm developing the plugin in NetBeans 8.1. The plugin can build and deploy to Jenkins 1.625.3 and I can see my permission show up in the matrix table.
The plugin has a class that extends the RunListener<AbstractBuild> extension point. I override the setUpEnvironment method and in this method I'm trying to see if the user that caused the build has my new permissions.
Unfortunately, every time I call User.get(username).hasPermission(permission), the result is true. I've simplified the testing by creating two users:
adminuser: has the Administer permission
devuser: currently has just Overall/Read and no other checkboxes checked.
If I put a debug break in my setUpEnvironment method, and add the following watch, the result is true:
User.get("devuser").hasPermission(hudson.model.Hudson.ADMINISTER)
Intuitively, I look at the code above and think hasPermission is based on the User returned by the get method. However, I'm starting to suspect that it doesn't matter that hasPermission is called on the user object; the security principal is some system user with uber access.
Can someone point me in the right direction?
Thanks!
The problem is that User.hasPermission(Permission p) calls ACL.hasPermission(Permission p), which in fact runs:
return hasPermission(Jenkins.getAuthentication(), p);
Therefore permissions are not checked for the loaded User but for the current user executing the code.
If you run the code below from the Script Console:
def instance = Jenkins.getInstance()
println instance.getAuthorizationStrategy().
    hasPermission("devuser", hudson.model.Hudson.ADMINISTER)
println instance.getAuthorizationStrategy().getACL(User.get("devuser")).
    hasPermission(User.get("devuser").impersonate(), hudson.model.Hudson.ADMINISTER)
println instance.getAuthorizationStrategy().getACL(User.get("devuser")).
    hasPermission(User.get("devuser").impersonate(), hudson.model.Hudson.ADMINISTER)
println instance.getAuthorizationStrategy().getACL(User.get("devuser")).
    hasPermission(hudson.model.Hudson.ADMINISTER)
println instance.getAuthorizationStrategy().getACL(User.current()).
    hasPermission(hudson.model.Hudson.ADMINISTER)
it will return:
false
false
false
true
true
As a "workaround" try to obtain authorization strategy directly from Jenkins object and execute hasPermission(...) method from it:
def instance = Jenkins.getInstance()
instance.getAuthorizationStrategy().hasPermission("devuser", Jenkins.ADMINISTER)
How can I check whether the build was triggered by a timer or by a user?
I have a build which is scheduled to run, and at times I also run it manually. Can I get who started the run from a variable or something within the same run?
We can use "BUILD_CAUSE" variable for getting the information about who initiated the run
There is a Jenkins plugin which allows you to identify, among other things, how the build was triggered.
In the features section of the link above you can see:
Build Cause
Run the build step depending on the cause of the build, e.g. triggered by timer, user, scm-change, ...
If you have a shared pipeline library, create a file startedByTimer.groovy:
@NonCPS
def call() {
    boolean startedByTimer = false
    def buildCauses = currentBuild.rawBuild.getCauses()
    for (buildCause in buildCauses) {
        if ("${buildCause}".contains("hudson.triggers.TimerTrigger\$TimerTriggerCause")) {
            startedByTimer = true
        }
    }
    return startedByTimer
}
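In a Jenkinsfile the step is then called by its file name, for example (assuming the shared library is loaded):

if (startedByTimer()) {
    echo 'This run was started by the cron trigger'
}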
As @Bachu already answered, the build cause is available in the ${BUILD_CAUSE} environment variable.
Moreover, for each trigger type there is a corresponding environment variable which can be interpreted as a boolean.
For builds triggered by timer it is ${BUILD_CAUSE_TIMERTRIGGER}, which will be set to true.
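A sketch of checking it from a pipeline script (assuming the variable is injected for the build, which is primarily the case for freestyle jobs):

if (env.BUILD_CAUSE_TIMERTRIGGER == 'true') {
    echo 'This build was started by the timer'
}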
Another solution would be to use the user build vars plugin:
def buildUser
wrap([$class: 'BuildUser']) {
buildUser = env.BUILD_USER
}
buildUser will contain a user name when the pipeline is triggered manually, and null otherwise.
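For example, branching on it afterwards could look like:

// buildUser comes from the wrap block above; null means no human trigger
if (buildUser) {
    echo "Manually started by ${buildUser}"
} else {
    echo 'Started automatically (timer, SCM, upstream, ...)'
}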
If using pipeline 2.0, you can use (the manager object comes from the Groovy Postbuild plugin):
if (manager.logContains("Started by timer")) {
    echo "This build was triggered by a timer."
}
You can check this information in the Jenkins console output.
Look at the console output. The first line will be:
Started by user xxx (if a human started it)
or
Started by an SCM change (if a change in version control triggered it)
or
Started by timer (if it was scheduled)