Determining how a Jenkins job was started

(I'm new to Jenkins and curl, so please forgive any imprecision below.)
I am running a Jenkins job on one network that sends a curl command to a second network in order to start a Jenkins job on that second network.
Sometimes I have to log onto that second network and restart the job using the Rebuild button provided by the Rebuild plugin.
I need to know how to determine whether the job on the second network was started by the original curl command or restarted via the Rebuild plugin, without the user having to do anything but restart the job with the same parameters.
I could use an extra boolean parameter in the job on the second network that is set to true by the curl command and to false when using the Rebuild button, but that requires the user to manually change the value of that parameter. I don't want the user to have to do that.

I think this is only possible by using different users; let's say a dedicated user for the remote invocation. Anything more informative would have to be done manually, for instance with a dedicated parameter on the job. Normally, jobs are started by developers, schedulers, hooks, or by other jobs on the same instance. If triggered remotely, the job is triggered by a user as well; without authentication it is the anonymous user who triggers the job.
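One way to inspect this after the fact, without any extra parameter, is to query the build's causes through the JSON API. A minimal sketch (host, job name, and credentials are placeholders; the exact shortDescription values depend on how the build was triggered, e.g. "Started by remote host ..." for a remote trigger versus "Started by user ..." for a manual rebuild):

# --globoff stops curl from treating the brackets as URL globs.
curl --globoff --user "$username:$api_token" \
  "http://jenkins.mydomain/job/myjob/<build#>/api/json?tree=actions[causes[shortDescription]]"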
Do you know the Jenkins CLI? It could replace your curl commands:
https://jenkins.io/doc/book/managing/cli/
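For example, a minimal sketch of triggering a parameterized job with the CLI (server URL, job name, and parameter are placeholders; jenkins-cli.jar can be downloaded from your server at /jnlpJars/jenkins-cli.jar):

# Trigger the job; -v streams its console output back.
java -jar jenkins-cli.jar -s http://jenkins.mydomain/ -auth "$username:$api_token" \
  build myjob -p SOME_PARAM=value -v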

Related

Call jobs on demand

We have a Docker container which is a CLI application: it runs, does its thing, and exits.
I got the assignment to put this into Kubernetes, but that container cannot simply be deployed, because it exits and is then considered to be in a crash loop.
So the next question is whether it can be put in a job. The job runs and gets restarted every time a request comes in over the proxy. Is that possible? Can a job be restarted externally with different parameters in Kubernetes?
So the next question is whether it can be put in a job.
If it is supposed to just run once, a Kubernetes Job is a good fit.
The job runs and gets restarted every time a request comes in over the proxy. Is that possible?
This cannot easily be done without external add-ons. Consider using Knative for this.
Can a job be restarted externally with different parameters in Kubernetes?
Not easily; if I understand you correctly, you need to interact with the Kubernetes API to create a new Job for this. One way to do this is to have a Job with a kubectl image and proper RBAC permissions on its ServiceAccount to create new Jobs, but this will involve some latency since two Jobs are involved.
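For the last point, a minimal sketch of what such a "new Job per request" call could look like with kubectl (the job name, image, and arguments are placeholders):

# Create a uniquely named Job for this request, overriding the container
# command with request-specific arguments.
kubectl create job "cli-app-$(date +%s)" \
  --image=registry.example.com/cli-app:latest \
  -- /app/cli-app --input=/data/request.json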

Make Jenkins node offline if job passes

Is there any script or plugin I can use to take a Jenkins node offline automatically if the job passes?
Or, alternatively, to explicitly give a choice parameter that takes the node offline after the current build?
I mentioned before an API for disconnecting a node: see more at "Monitor and Restart Offline Slaves".
That means you can add a post-build action which calls this API through curl.
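A minimal sketch of such a post-build call (credentials and host are placeholders; note that toggleOffline flips the node's current state, so only call it while the node is online, and your Jenkins may additionally require a CSRF crumb):

# $NODE_NAME is provided by Jenkins during the build.
curl --request POST --user "$username:$api_token" \
  --data-urlencode "offlineMessage=Taken offline after passing build" \
  "http://jenkins.mydomain/computer/$NODE_NAME/toggleOffline"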

How do I check for build status, running Jenkins jobs from a BitBucket pipeline?

We're using BitBucket to host our Git repositories.
We have defined build jobs in a locally-hosted Jenkins server.
We are wondering whether we could use BitBucket pipelines to trigger builds in Jenkins, after pull request approvals, etc.
Triggering jobs in Jenkins through its REST API is fairly straightforward.
1: curl --request POST --user $username:$api_token --head http://jenkins.mydomain/job/myjob/build
This returns a Location response header. By doing a GET on that, we can obtain information about the queued item:
2: curl --user $username:$api_token http://jenkins.mydomain/queue/item/<item#>/api/json
This returns JSON describing the queued item, indicating whether the item is blocked, and why. If it's not, it includes the URL for the build. With that, we can check the status of the build itself:
3: curl --user $username:$api_token http://jenkins.mydomain/job/myjob/<build#>/api/json
This will return yet more JSON, indicating whether the job is currently building and, if it has completed, whether the build succeeded.
Now, BitBucket pipeline steps run in Docker containers and have to run on Linux. Our Jenkins build jobs run on a number of platforms, not all of which are Linux. But BitBucket shouldn't care: making the necessary REST API calls can be done from Linux, as I do in the examples above.
But how do we script this?
Do we create a single step that runs a shell script that runs command #1, then repeatedly calls command #2 until the build is started, then repeatedly calls command #3 until the build is done?
Or do we create three steps, one for each? Do BitBucket pipelines provide for looping on steps? Calling a step, waiting for a bit, then calling it again until it succeeds?
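For what it's worth, the single-step variant could look something like the sketch below (assuming jq is available in the pipeline image; hosts, job names, and credentials are placeholders, and there is no timeout handling):

#!/bin/sh
# Command #1: trigger the build and capture the queue item URL
# from the Location response header.
queue_url=$(curl --silent --head --request POST --user "$username:$api_token" \
  "http://jenkins.mydomain/job/myjob/build" \
  | grep -i '^location:' | awk '{print $2}' | tr -d '\r')

# Command #2: poll the queue item until it exposes the build URL.
build_url="null"
while [ "$build_url" = "null" ] || [ -z "$build_url" ]; do
  sleep 10
  build_url=$(curl --silent --user "$username:$api_token" \
    "${queue_url}api/json" | jq -r '.executable.url')
done

# Command #3: poll the build until .result is set (e.g. SUCCESS or FAILURE).
result="null"
while [ "$result" = "null" ]; do
  sleep 10
  result=$(curl --silent --user "$username:$api_token" \
    "${build_url}api/json" | jq -r '.result')
done
echo "Build finished: $result"
[ "$result" = "SUCCESS" ]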
I think you should use either a Bitbucket pipeline or a Jenkins pipeline. Using both will give you too many options and make the project more complex than it should be.

Jenkins slave runs as user

I have a Jenkins setup with multiple users who log in with the Active Directory plugin. This is useful so that each user can access his own tasks.
However, each user also has different permissions on the local network, such as access to different folders. I have noticed that the permissions given to each task are not linked to the user but to the account under which the slave is running as a service. Is there a way to change that, so that the task is executed on the slave under the credentials (and hence permissions) of the user?
Thank you
The problem is: there is only one slave process running the different jobs assigned to that server by the Jenkins master.
So the slave itself runs as one user (generally, a dedicated account or a system account).
Since you can get the user id as an environment variable (with a plugin like the Jenkins Build User Vars plugin), you might consider configuring the job so that its build steps "run as" the user who triggered the build.
See, for instance, the Jenkins Authorize Project plugin.
However, as mentioned in this answer:
The "Authorize Project" plugin does not change the OS level user that is running commands.
It only sets the Jenkins user that is running the job and any downstream jobs, using Jenkins authentication (whatever it might be).
So you are left with build steps using runas or su -c commands in order to be sure that your task runs as the right user.
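For instance, a minimal sketch of such a build step (assuming a Linux agent, the Build User Vars plugin exposing $BUILD_USER_ID, a matching OS account, and sudo rights for the agent's service account; all of these are assumptions about your setup):

# Run the actual task as the OS user matching the Jenkins user
# who triggered the build.
sudo su - "$BUILD_USER_ID" -c "/path/to/build-task.sh"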
I had a similar issue, and I recall that for more control over projects I used the Role Strategy plugin and set up global security using LDAP servers (Active Directory should also be OK).
I also used the Authorize Project plugin.
Have a look; I hope it solves your problem. Let me know in the comments if anything needs clarification.
You can partially fix your problem this way:
install the slave as a service using the Java Web Start method and JNLP
go to the Services control panel in Windows
under Properties -> Log On, replace the Local System account with a specific user
restart the service
This at least gives you the ability to use a dedicated account instead of the system account.

Jenkins kill all child processes

I have a Jenkins job that runs a bash script.
In the bash script I perform effectively two actions, something like:
java ApplicationA &
PID_A=$!
java ApplicationB
kill $PID_A
but if the job is manually aborted, ApplicationA remains alive (as can be seen with ps -ef on the node machine). I cannot use trapping and so on, because that won't work if Jenkins sends signal 9 (SIGKILL cannot be trapped).
It would be ideal if this job could be configured to simply kill all processes that it spawns. How can I do that?
Actually, by default, Jenkins has a feature called ProcessTreeKiller which is responsible for making sure there are no processes left running after the job execution.
The link above explains how to disable that feature. Are you sure you haven't disabled it by mistake somehow?
Edit:
Following the comments by the author, and based on the information about disabling ProcessTreeKiller, to achieve the inverse one must set the environment variable BUILD_ID to the build id of the Jenkins job. This way, when ProcessTreeKiller looks through the running processes to kill, it will find this process as well:
# Re-export the job's own BUILD_ID so child processes inherit it and are
# recognized (and killed) by the ProcessTreeKiller when the build ends.
export BUILD_ID=$BUILD_ID
You can also use the Build Result Trigger plugin: configure a second job to clean up your applications, and set it to monitor the first job for the ABORTED state as its trigger.
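The cleanup job itself can be as simple as the sketch below (the process pattern is an assumption based on the script in the question):

# Kill any leftover ApplicationA processes from the aborted build;
# pkill -f matches against the full command line.
pkill -f 'java ApplicationA' || true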

Resources