GNU Parallel: Halt on success -or- failure

Is it possible to set a --halt condition (or multiple --halt conditions?) such that all jobs will be halted when any one of them finishes, regardless of its exit code?
I want to monitor for an event (that I just triggered, separately, on a load balanced service). I can identify if the event passed or failed by viewing the logs, but I have to view logs on multiple servers at once. Perfect! Parallel it! I have an extra requirement though: I want to return success or failure based on the log result.
So I want to stop the parallel jobs as soon as any of them detects the event (i.e. "--halt now"), but I don't know in advance whether the detection will return zero or non-zero (that's the point: I'm trying to find out), so neither "--halt now,success=1" nor "--halt now,fail=1" is right. I need a way to do something like "--halt now,any=1".
I had a look through the source and, well, my Perl kung-fu is inadequate to tackle this (exitstatus is used in many different places in the source, so it's hard for me to tell whether this would even be feasible).
Note that ,success=1 and ,fail=1 both work perfectly (given the corresponding exit status), but I don't know whether it will be success or failure before I run parallel.

The GNU Parallel manpage says:
--halt now,done=1
exit when one of the jobs finishes. Kill running jobs.
Source: https://www.gnu.org/software/parallel/man.html (search for --halt - it's a big page)
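That is exactly the "any of them, whatever the exit code" behaviour asked for. A toy illustration (the commands are placeholders, and done= needs a reasonably recent GNU Parallel):

# Whichever job finishes first - with exit 0 or not - stops the others.
parallel --halt now,done=1 ::: 'sleep 1; true' 'sleep 5; false' 'sleep 10; true'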

If you (as a human) are viewing the logs, why not use Ctrl-C?
If you simply want all jobs to be killed when the first finishes, then append true to your command to force it to become a success:
parallel -Sserver{1..10} --halt now,success=1 dosomething {}\;true ::: {1..100}

Related

TFS build custom conditions for running a task - check if specific previous task has failed

TFS build allows you to specify conditions for running a task (see the conditions reference).
The condition I would like to define is: a specific task [addressed by name or by some other means] has failed.
This is similar to Only when a previous task has failed, but I want to specify which previous task that is.
Looking at the examples I don't see any condition that is addressing a specific task outcome, only the entire build status.
Is this possible? Is there any workaround to achieve it?
It doesn't seem like there's an out-of-the-box solution for this requirement, but I can offer an (admittedly ugly :)) workaround.
Suppose your specific task (the one whose status you want to examine) is called A. The goal is to call another build task (let's say B) only in case A fails.
You can do the following:
Define a custom build variable, call it task.A.status, and set it to success
Create another build task, e.g. C, and schedule it right after A; condition it to only run if A fails - there's a standard condition for that
The task C should only do one thing - set task.A.status build variable to 'failure' (like this, if we are talking PowerShell: Write-Host "##vso[task.setvariable variable=task.A.status]failure")
Finally, the task B is scheduled sometime after C and is conditioned to run in case task.A.status equals failure, like this: eq(variables['task.A.status'], 'failure')
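If you are on YAML pipelines, the pieces might fit together roughly like this (step names and commands are placeholders, and I have not verified every syntax detail):

variables:
  task.A.status: success

steps:
- script: ./build-A.sh            # "task A" (placeholder command)
  name: A
- powershell: Write-Host "##vso[task.setvariable variable=task.A.status]failure"
  name: C
  condition: failed()             # standard "a previous task has failed" condition
- script: ./run-B.sh              # "task B" (placeholder command)
  name: B
  condition: eq(variables['task.A.status'], 'failure')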
I might be incorrect in syntax details, but you should get the general idea. Hope it helps.

Two-stage Jenkins pipeline script doesn't lock onto needed nodes

I have a pipeline script where I want to kick off parallel builds on two different build machines, and once it's all done, perform some post-run activity like unstashing and publishing test results, creating an archive from all of the binaries and libraries generated, etc.
It basically looks like this, where 'master' is a MacOS machine and we've got a separate machine for Windows builds:
// main run stuff
parallel (
    "mac" : {
        node ('master') {
            for (job in macJobs) {
                job.do()
            }
        }
    },
    "windows" : {
        node ('windowsMachine') {
            for (job in windowsJobs) {
                job.do()
            }
        }
    }
)

node('master') {
    // post-run stuff
}
If I kick off a single build with this script then it completes no problem.
But, if a second build kicks off while the first is still working through the parallel block (i.e. it's polling SCM and someone did a push while the first build is still going), then the post-run block doesn't get executed until the second job's parallel block completes.
There's obviously a priority queue based on who gets to request the node first, but I'd like for one complete script run to finish before Jenkins moves on to the next, so we don't end up with jobs piling up on the post-run block which normally only takes a couple of seconds to complete...
How do I modify the script to do this? I've tried wrapping it all in a single stage block, but no luck there.
I would guess that part of the problem lies in your post-run stuff sharing the master node with one of your parallel tasks, especially if the master node only has one or two executors, which would definitely put it at 100% load with more than one concurrent build.
If this sounds like it might be part of your problem, you can try giving your post-run stuff a dedicated node to guarantee availability independent of triggered builds. Or increase the executors available on your master node to guarantee that even if there are a couple concurrent builds, there are still executors available for those post-runs.
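For example, with an agent carrying a dedicated label ('post-run' here is a made-up label), the tail of the script becomes:

// runs on the dedicated agent, so it never has to wait for the busy master node
node('post-run') {
    // unstash results, publish test reports, archive binaries and libraries, ...
}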
Jenkins doesn't really care about the origin of a block to execute, so if you have two jobs running at the same time and each uses the master node in two separate blocks, there is a real chance the first block of each job will execute before either of their second blocks is reached. If that node only has two executors available, you may even end up with a starved queue for it; at the very least, an executor must become available before either of those second blocks can begin.

Orchestration of tasks run in parallel blocks

I'm facing a problem and hope you'll be able to give me a hand :-)
The problem
I'm trying to write a pipeline like the one below:
parallel(
    "task A": { build some stuff and run verifications },
    "task B": { build more stuff. Wait for task A to be finished. Run tests. },
    "task C": { build some more stuff. Wait for task A to be finished. Run tests. }
)
My problem is I can't find a way to wait for the completion of task A.
Things I've tried
Store the result of the build
In "task A", I would run the job like this: job_a = build job: "Job_A"
Then in tasks B and C, I would use the attributes of "job_a".
Unfortunately this doesn't work, as I get an error because job_a is not defined (in the scope of tasks B and C). There may be a fork happening when using "parallel".
I also tried defining "job_a" before the parallel block and still assigning the job to it in "task A", but this did not work either: in tasks B and C, job_a only ever had the value it was first given.
Schedule task A outside the parallel block
I also tried scheduling the job directly before the parallel block.
I would get a job object and then directly run job.scheduleBuild2.
Here again no success.
Any idea how to do this?
The main reasons I would like to set up the pipeline this way are:
All these jobs run on slaves (most likely different).
If task A is finished, and the build of task B is finished, the tests should start. Even if the build of task C hasn't finished yet.
Same if task C finishes before task B.
I'd be very grateful if you have an idea how to implement this :-)
More generally I'm also curious how this all works behind the scenes.
Indeed, when running parallel, several processes or threads must be used. How does the master keep communicating with a slave during a build to update status, etc.?
Thanks a lot :-D
I tried to find a solution to your problem, but I was only able to come up with something close to what you are asking for. As far as I am aware, parallel in Jenkinsfiles is currently implemented in a way that does not support communication between the different processes running in parallel. Each of your parallel tasks runs in its own sandbox and therefore cannot access information about the others directly.
One solution could be the following:
A,B and C are started in parallel
B or C finishes its first stage and now needs A to continue
Introduce a waiting stage into B and C
B and/or C poll the Jenkins remote api of A (http://jenkins/job/job.A/lastBuild/api/json) and look for the result entry
If result is null -> keep waiting, if result is SUCCESS -> continue, if result is FAILURE throw exception and so on
The obvious downside of this solution is that you have to implement that stage and make actual HTTP calls to get the JSON responses.
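A rough sketch of that waiting stage in B or C, assuming curl and jq are available on the agent (job name and Jenkins URL are placeholders):

stage('Wait for A') {
    waitUntil {
        // poll the remote API described above until its "result" field is filled in
        def result = sh(
            script: "curl -s http://jenkins/job/job.A/lastBuild/api/json | jq -r .result",
            returnStdout: true
        ).trim()
        if (result == 'FAILURE') {
            error 'Job A failed'
        }
        return result == 'SUCCESS'
    }
}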
Another solution could be:
Split B and C into two jobs each
Run the first parts of B and C in parallel with A
Run the second part of B and C in parallel once the first parallel stage has finished
The downside here would be that it is slower than the setup you wish for in your question, but it would be considerably less effort to implement.
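In pipeline terms the second variant might look roughly like this (job names are placeholders):

// first phase: A plus the build halves of B and C
parallel(
    "A": { build job: 'Job_A' },
    "B build": { build job: 'Job_B_build' },
    "C build": { build job: 'Job_C_build' }
)
// second phase: the test halves of B and C, started only once everything above is done
parallel(
    "B test": { build job: 'Job_B_test' },
    "C test": { build job: 'Job_C_test' }
)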

Ant : how to always execute a task at the end of every run (regardless of target)

Is there a way to define a task in Ant that always gets executed at the end of every run? This SO question provides a way to do so at the start of every run, before any other targets have been executed, but I am looking for the opposite case.
My use case is to echo a message warning the user if a certain condition was discovered during the run but I want to make sure it's echoed at the very end so it gets noticed.
Use a BuildListener, e.g. the exec-listener, which provides a task container for each build result (BUILD SUCCESSFUL | BUILD FAILED) where you can put all the tasks you need; see:
https://stackoverflow.com/a/6391165/130683
for details.
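If the linked listener does not fit, rolling your own is not much code either. A minimal sketch (class name, property name and message are made up; compile it onto Ant's classpath and register it with ant -listener WarnAtEndListener):

import org.apache.tools.ant.BuildEvent;
import org.apache.tools.ant.BuildListener;

public class WarnAtEndListener implements BuildListener {
    // called once after every run, whether the build succeeded or failed
    public void buildFinished(BuildEvent event) {
        // warn only if some earlier target set this (hypothetical) property
        if (event.getProject().getProperty("warn.condition.found") != null) {
            System.out.println("WARNING: the condition was detected during this run");
        }
    }
    // the remaining BuildListener methods are not needed here
    public void buildStarted(BuildEvent event) {}
    public void targetStarted(BuildEvent event) {}
    public void targetFinished(BuildEvent event) {}
    public void taskStarted(BuildEvent event) {}
    public void taskFinished(BuildEvent event) {}
    public void messageLogged(BuildEvent event) {}
}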
It's an interesting situation. Normally, I would say you can't do this in an automated way. You could wrap Ant in a shell script to do it, but Ant itself really isn't a full-fledged programming language.
The only thing I can think of is to add an <ant> call at the end of each target to echo out what you want. You could set it up so that if a property isn't set, the echo won't happen. Of course, this means calling the same target a dozen or so times just to get that final <echo>.
I checked through AntXtras and Ant-Contrib for possible methods, but couldn't find any.
Sorry.
Wrap your calls in the sequential container.
http://ant.apache.org/manual/Tasks/sequential.html

How to properly run a Symfony task in the background from an action?

$path = sfConfig::get('sf_app_module_dir')."/module/actions/MultiTheading.php";
foreach ($arr as $id)
{
    if ($id)
        passthru("php -q $path $id $pid &");
}
When I run the action, the scripts run sequentially despite the "&".
Please help
There are two common methods to achieve what you want.
Both involve creating a table in your database (kind of a to-do list). Your frontend saves work to do there.
The first one is easier, but it's only OK if you don't mind a slight latency. You start by creating a symfony task. When it wakes up (every 10/30/whatever minutes) it checks that table, simply exits if there is nothing to do, and otherwise does the work and marks the rows as processed.
The second one is more work and more error-prone, but can react instantly. You create a task that daemonizes itself when started (forks, forks again, and detaches from the parent process), then goes to sleep. When you have some work to do, you wake it up by sending a signal. Daemonizing and signal sending/receiving can be done with PHP's pcntl_* (and posix_*) functions.
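For reference, the daemonizing part of the second method boils down to a few pcntl_*/posix_* calls. A bare sketch, with no Symfony task plumbing and no error handling (the signal choice and handler body are placeholders):

<?php
$pid = pcntl_fork();
if ($pid > 0) { exit(0); }           // first parent exits
posix_setsid();                      // become session leader, detach from the terminal
$pid = pcntl_fork();
if ($pid > 0) { exit(0); }           // second parent exits; the grandchild is the daemon

function handle_wakeup($signo)
{
    // read the to-do table and process the pending rows (not shown)
}
pcntl_signal(SIGUSR1, 'handle_wakeup');

while (true) {
    sleep(60);                       // sleep() returns early when a signal arrives
    pcntl_signal_dispatch();         // deliver any signals the frontend sent us
}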
