I have two parallel Foreach Loops, and each of them contains a Script Task that writes files. When loop A finishes its script, it also stops loop B, which was not done yet.
What should I do so that each loop is allowed to run until it finishes?
I had used the same resultset object for both scripts. This causes the object to lock up when one of them succeeds/finishes. So always create a new object per SQL task resultset, or this will occur.
This is my code
https://pastebin.com/fnZreFKA
I have tried all the coroutine options. I have print statements at the start of each of the two functions, and they print, but nothing inside the loops runs.
coroutine.wrap(constantWork)()
coroutine.wrap(lookForKeys)()
The loops start after line 170
Because they are not detached threads but cooperative ("green") threads, only one of them runs at a time.
To simulate multitasking, you need to yield, which you forgot to do. coroutine.yield pauses the current coroutine and resumes the code after the point where you called it. You can resume the coroutine later by calling the wrapped coroutine again, or by using coroutine.resume if you created it with coroutine.create.
Read the documentation here: https://www.lua.org/pil/9.html
coroutine.wrap creates a new coroutine based on the function you passed it, and then creates a new function based on the coroutine. The first time you call it, it calls the original function until it yields. The next time, it returns from the yield and runs until the next yield. And so on.
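The yield/resume dance described above can be sketched in Ruby, whose Fiber class is a close analogue of Lua's coroutines (Fiber.yield corresponds to coroutine.yield, and resume to calling the wrapped coroutine again); this is an illustration of the mechanism, not ComputerCraft code:

```ruby
# Each call to resume runs the body until the next Fiber.yield,
# then control (and the yielded value) returns to the caller.
worker = Fiber.new do
  3.times do |i|
    Fiber.yield i   # pause here; the caller receives i
  end
  :done             # final return value of the body
end

results = []
results << worker.resume  # runs until the first yield
results << worker.resume  # resumes after the yield
results << worker.resume
results << worker.resume  # body finishes; returns :done
# results is now [0, 1, 2, :done]
```

The key point, in Lua as in Ruby: if the body never yields, the first resume runs it to completion and nothing else gets a turn, which is exactly the symptom in the question.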
In ComputerCraft, yielding is the same as waiting for an event.
ComputerCraft comes with the parallel library which runs two or more functions as coroutines in parallel. It does all the work for you.
You can use parallel.waitForAll or parallel.waitForAny, depending on when you want it to return.
Usage: parallel.waitForAll(constantWork, lookForKeys)
I'm facing a problem and hope you'll be able to give me a hand :-)
The problem
I'm trying to write a pipeline like the one below:
parallel(
"task A": { build some stuff and run verifications},
"task B": { build more stuff. Wait for task A to be finished. Run tests.},
"task C": { build some more stuff. Wait for task A to be finished. Run tests.}
)
My problem is I can't find a way to wait for the completion of task A.
Things I've tried
Store the result of the build
In "task A", I would run the job like this: job_a = build job: "Job_A"
Then in task B and C, I would use the attributes of "job_a".
Unfortunately this doesn't work: I get an error because job_a is not defined in the scope of tasks B and C. There might be a fork happening when using "parallel".
I also tried defining "job_a" before the parallel block and still assigning the job to it in "task A", but this did not work either: in tasks B and C, job_a only ever had the value it was first given.
Schedule task A outside the parallel block
I also tried scheduling the job directly before the parallel block.
I would get a job object and then directly run job.scheduleBuild2.
Here again no success.
Any idea how to do this?
The main reasons I would like to set up the pipeline this way is:
All these jobs run on slaves (most likely different).
If task A is finished, and the build of task B is finished, the tests should start. Even if the build of task C hasn't finished yet.
Same if task C finishes before task B.
I'd be very grateful if you have an idea how to implement this :-)
More generally, I'm also curious how this all works behind the scenes.
Indeed, when running parallel, several processes or threads must be used. How does the master keep communicating with a slave during a build to update the status, etc.?
Thanks a lot :-D
I tried to find a solution to your problem, but I was only able to come up with something close to what you are asking for. As far as I am aware, parallel in Jenkinsfiles is currently implemented in a way that does not support communication between the different processes running in parallel. Each of your parallel tasks runs in its own sandbox and therefore cannot access information about the others directly.
One solution could be the following:
A,B and C are started in parallel
B or C finishes its first stage and now needs A to continue
Introduce a waiting stage into B and C
B and/or C poll the Jenkins remote api of A (http://jenkins/job/job.A/lastBuild/api/json) and look for the result entry
If result is null -> keep waiting, if result is SUCCESS -> continue, if result is FAILURE throw exception and so on
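The polling loop from the steps above could be sketched as follows (in Ruby, purely for illustration; the "result" field is the one exposed by the Jenkins remote API mentioned above, while the function name and the fetch callback are hypothetical — in a real pipeline, fetch would perform the HTTP GET against http://jenkins/job/job.A/lastBuild/api/json):

```ruby
require 'json'

# Poll the upstream build's JSON until "result" becomes non-null.
# `fetch` is any callable that returns the JSON body as a string.
def wait_for_result(fetch, interval: 5, attempts: 100)
  attempts.times do
    result = JSON.parse(fetch.call)['result']
    case result
    when nil       then sleep interval   # build still running, keep waiting
    when 'SUCCESS' then return result    # upstream job finished cleanly
    else raise "upstream build ended with #{result}"
    end
  end
  raise 'timed out waiting for upstream build'
end
```

A running build reports "result": null, so the loop sleeps and retries; any terminal state other than SUCCESS aborts the waiting stage with an exception, which maps naturally onto failing the Jenkins stage.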
The obvious downside of this solution is that you have to implement that stage yourself and make actual HTTP calls to get the JSON responses.
Another solution could be:
Split B and C into two jobs each
Run the first parts of B and C in parallel with A
Run the second part of B and C in parallel once the first parallel stage has finished
The downside here is that it is slower than the setup you wish for in your question. But it would be considerably less effort to implement.
How can I abort the whole test set's execution from within a script?
I have a library which, if it encounters certain circumstances, comes to the conclusion that further test execution does not make any sense. The "hardest" abort I know is ExitTest, but it only aborts the current test's execution, not the whole test set.
I understand I could map this to test dependencies in the test set, but those should be used only to model business-driven dependencies between tests, to coordinate parallel test execution, as opposed to the global abort I am looking for and which can happen anytime, in any test (i.e. deep, deep in library code). I certainly don't want to depend all tests on their predecessor tests' passed/failed status just for this. And it also would lead to other "branches" of the dependency tree being executed anyways.
So how can I abort the complete test set execution programmatically?
Well, you could set a flag value to EXIT before doing ExitTest, and either return this flag to the calling function or the driver script/function. If that's not possible, you could write the flag value to a temporary file and make your driver script read that file before it moves on to the next test set.
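The flag-file handshake described above could look roughly like this (sketched in Ruby for illustration; the file path and the EXIT marker are placeholders — the same pattern works in whatever language your driver script uses):

```ruby
require 'tmpdir'

# Placeholder location for the abort flag; both the library code and
# the driver script must agree on it.
FLAG_FILE = File.join(Dir.tmpdir, 'abort_flag.txt')

# Called from deep library code just before aborting the current test.
def signal_abort
  File.write(FLAG_FILE, 'EXIT')
end

# Called by the driver script before it starts the next test (set).
def abort_requested?
  File.exist?(FLAG_FILE) && File.read(FLAG_FILE).strip == 'EXIT'
end
```

The driver simply checks abort_requested? between tests and stops scheduling further ones when it returns true; remember to delete the flag file at the start of each run so a stale flag doesn't abort a fresh execution.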
I need some help with the following issue:
Inside a Foreach Loop, I have a Data Flow Task that reads each file from the collection folder. If it fails while processing a certain file, that file is copied to an error folder (using a File System Task called "Copy Work to Error").
I would like to set up a Send Email Task that warns me if there were any files sent to the error folder during the package execution. I could easily add this task after the "Copy Work to Error" task, but if there are many files that fail the Data Flow Task, my inbox would get filled.
Instead, I would like the Send Mail Task to run only once (after the Foreach Loop completes), and only if the "Copy Work to Error" task was executed at least once. Is there any way I could achieve that?
Thanks,
Ovidiu
Here's one way that I can think of:
Create an integer variable, #Total outside the ForEach container and set it to 0.
Create an integer variable, #PerIteration inside the ForEach container.
Add a Script Task as an event handler to the File System Task. This task should increment #Total by #PerIteration.
Add your SendMail task after the ForEach container. In the precedence constraint, set type to Expression, and specify the condition #Total > 0. This should ensure that your task is triggered only if the File System Task was executed in the loop at least once.
You could achieve this using just a Boolean variable, say IsError, created outside the scope of the Foreach Loop with a default value of False. Set it to True immediately after the success of the "Copy Work to Error" task, using an Expression Task (SSIS 2012) or an Execute SQL Task. Finally, connect your Send Mail Task to the Foreach Loop with the precedence constraint's expression set to @IsError.
When the error happens, create a record in a table with the information you would like to include in the email - e.g.
1. File that failed with full path
2. the specific error
3. date/time
Then at the end of the package, send a consolidated email. This way, you have a central location to turn to in case you want to revisit the issue, or if the email is lost/not delivered.
If you need implementation help, please revert back.
The exe should run when I open the page; the process needs to run asynchronously.
Is there any way to run an exe asynchronously with two arguments in Ruby?
I have tried the Ruby commands system() and exec(), but they wait for the process to complete. I need to start the exe with parameters and not wait for the process to finish.
Are there any rubygems that would support this?
You can use Process.spawn and Process.wait2:
pid = Process.spawn 'your.exe', '--option'
# Later...
pid, status = Process.wait2 pid
Your program will be executed as a child process of the interpreter. Besides that, it will behave as if it had been invoked from the command line.
You can also use Open3.popen3:
require 'open3'
*streams, thread = Open3.popen3 'your.exe', '--option'
# Later...
streams.each &:close
status = thread.value
The main difference here is that you get access to three IO objects. The standard input, output and error streams of the process are redirected to them, in that order.
This is great if you intend to consume the output of the program, or communicate with it through its standard input stream. Text that would normally be printed on a terminal will instead be made available to your script.
You also get a thread which will wait for the program to finish executing, which is convenient and intuitive.
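For example, to capture the program's output through those streams (here the Ruby interpreter itself stands in for your.exe, so the sketch is self-contained):

```ruby
require 'open3'

# Launch a child process and read what it prints; `ruby -e` is a
# stand-in for the real executable and its arguments.
stdin, stdout, stderr, thread = Open3.popen3('ruby', '-e', 'puts "hello"')
stdin.close                  # nothing to send to the child
output = stdout.read.strip   # blocks until the child closes its stdout
stdout.close
stderr.close
status = thread.value        # waits for the process to exit
```

Note that stdout.read blocks until the child finishes writing, so if you want the page to stay responsive you would do this reading in a background thread, or skip popen3 entirely when you don't care about the output.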
exec switches control to a new process and never returns. system creates a subprocess and waits for it to finish.
What you probably want to do is fork and then exec to create a new process without waiting for it to return. You can also use the win32ole library which might give you more control.
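A minimal sketch of that fork-and-exec pattern (the command line is a stand-in for your exe and its two arguments; note that fork is unavailable on Windows, where Process.spawn followed by Process.detach is the closer equivalent):

```ruby
# Fork a child, replace it with the external program via exec, and
# detach so the parent never blocks waiting for it to finish.
pid = fork do
  exec 'ruby', '-e', 'sleep 0.1'   # stand-in for: your.exe arg1 arg2
end
Process.detach(pid)   # reap the child in the background; don't wait
```

After Process.detach, the parent continues immediately while the child runs on its own; the detached thread quietly collects the exit status so the child doesn't linger as a zombie.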