Is there any way to run a single Jenkins job in parallel without it going into the queue? Currently I have a Jenkins job which is triggered from Bitbucket for each pull request, but all build triggers go into a queue and pull requests take a long time. I have multiple executors configured, but builds still end up in the queue. Please note that each build runs entirely in a Docker container, in a separate container instance.
Related
I came across the /quietDown URL (jenkinsQuietDown) for pausing a Jenkins instance. I want to know if it can be used to pause only a single job without impacting other running jobs on the instance.
I want to pause a particular job on failure, and restart it from the top of the queue after the issue is resolved. This is to maintain the order of execution of the job for different parameters.
The jenkinsQuietDown command works at the Jenkins level, not at the job level.
quietDown: Put Jenkins in a quiet mode, in preparation for a restart. In that mode, Jenkins doesn't start any new builds.
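For reference, a sketch of the relevant HTTP endpoints (the credentials and job name are placeholders). Note that disabling a job only prevents new builds of that job from starting; it does not pause a build that is already running:

    # Instance-wide quiet mode (affects the whole Jenkins instance)
    curl -X POST -u "$USER:$API_TOKEN" "$JENKINS_URL/quietDown"
    curl -X POST -u "$USER:$API_TOKEN" "$JENKINS_URL/cancelQuietDown"

    # Closest per-job alternative: disable/enable a single job
    curl -X POST -u "$USER:$API_TOKEN" "$JENKINS_URL/job/my-job/disable"
    curl -X POST -u "$USER:$API_TOKEN" "$JENKINS_URL/job/my-job/enable"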
We have a Docker container which is a CLI application: it runs, does its thing, and exits.
I was assigned to move this into Kubernetes, but the container cannot be deployed as a regular Deployment, because it exits and is then considered to be in a crash loop.
So the next question is whether it can be put in a Job. The Job runs and gets restarted every time a request comes in over the proxy. Is that possible? Can a Job be restarted externally with different parameters in Kubernetes?
So the next question is whether it can be put in a Job.
If it is supposed to just run once, a Kubernetes Job is a good fit.
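A minimal Job manifest as a sketch (the image name and arguments are hypothetical). The key settings are restartPolicy: Never, so Kubernetes does not treat the normal exit as a crash loop, and backoffLimit, which controls retries on failure:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: cli-run
    spec:
      backoffLimit: 0            # do not retry if the CLI exits non-zero
      template:
        spec:
          restartPolicy: Never   # a finished container is not restarted
          containers:
          - name: cli
            image: registry.example.com/my-cli:latest  # hypothetical image
            args: ["--input", "/data/file"]            # hypothetical arguments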
The Job runs and gets restarted every time a request comes in over the proxy. Is that possible?
This cannot easily be done without external add-ons. Consider using Knative for this.
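As an illustration, a minimal Knative Service sketch. Note that Knative Serving expects the container to answer HTTP requests, so the CLI would need a thin HTTP wrapper (the image name is hypothetical):

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: cli-service
    spec:
      template:
        spec:
          containers:
          - image: registry.example.com/my-cli-http:latest  # hypothetical HTTP wrapper around the CLI

Knative then scales the service from zero on incoming requests, which matches the "run on each request over the proxy" pattern.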
Can a Job be restarted externally with different parameters in Kubernetes?
Not easily; if I understand you correctly, you need to interact with the Kubernetes API to create a new Job for this. One way to do it is to have a Job that runs a kubectl image, with the proper RBAC permissions on its ServiceAccount to create new Jobs, but this will involve some latency since it is two Jobs.
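A rough sketch of that approach using plain kubectl (all names are placeholders):

    # Give a ServiceAccount permission to create Jobs
    kubectl create serviceaccount job-runner
    kubectl create role job-creator --verb=create,get,list --resource=jobs
    kubectl create rolebinding job-creator-binding \
      --role=job-creator --serviceaccount=default:job-runner

    # From a pod that uses this ServiceAccount and runs a kubectl image,
    # spawn a new Job with request-specific parameters
    kubectl create job "cli-run-$(date +%s)" \
      --image=registry.example.com/my-cli:latest -- my-cli --param "$VALUE"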
This question may seem similar to other questions, but I have tried everything and nothing has worked for me, which is why I'm asking specifically about my case.
I am running Jenkins jobs using the Pull Request Builder in a master-worker setup. I have 2 workers with 2 executors each; the master doesn't have any executors. I have 2 freestyle jobs, A and B. My plan is to run jobs A and B concurrently (whenever a PR is opened/modified) on the worker nodes, but I cannot run job A or B concurrently on a node. Currently, by default, job A is tied to one worker node and job B is tied to the other. All other builds sit in the queue, which delays my test execution.
I looked at different plugins: Node and Label Parameter, distributed builds, job restrictions. I tried labelling the jobs and nodes, but the jobs didn't trigger. I'm not sure what the problem is, as I don't see any errors in the logs; maybe I'm not using the plugins properly. Can someone please let me know a good way of dealing with my situation?
I want to upgrade my Jenkins master without aborting long-running jobs on the slaves or waiting for them to finish. Is there a plugin available that provides this feature?
We have several build jobs running regression and integration tests which take hours. Often at least one of those jobs is running, making it hard to restart Jenkins after updates. I know that it is possible to block the queue; we tried this, but it hinders more than it helps.
What we are looking for is a plugin that runs jobs on slaves, caches the output as soon as the connection to the master is interrupted, and sends the remaining output to the master when the master is up again. Does anybody know of a plugin providing this feature?
I have a couple of unit testing / BDD jobs on our Jenkins instance that trigger a bunch of processes as they run. I have multiple Windows slaves, any one of which can run my tests.
After the test execution is complete, irrespective of whether the build status is passed/failed/unstable, I want to run "taskkill" and kill a couple of processes.
I had been doing that by triggering a "Test_Janitor" downstream job, but this approach no longer works since I added more than one slave.
How can I either run the downstream job on the same slave as the upstream job, or have some sort of post-build step to run "taskkill"?
You can install the Post Build Task plugin to call a batch script on the slave (once your UT/BDD runs have completed).
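For example, a batch step along these lines, with placeholder process names (/F forces termination, /T also kills child processes, /IM selects processes by image name):

    taskkill /F /T /IM my_test_process.exe
    taskkill /F /T /IM my_helper_process.exe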
The other solution is to trigger a downstream job and pass the %NODE_NAME% variable to it with the Parameterized Trigger plugin.
Next, you can use psexec to kill the processes on the relevant node.
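A sketch of that combination, run from the downstream job (the process name is a placeholder; -accepteula suppresses the PsExec EULA prompt on first use):

    psexec \\%NODE_NAME% -accepteula taskkill /F /T /IM my_test_process.exe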