How to catch a signal multiple times in BonitaSoft? - business-process-management

I'm calling Signal2 from Signal5 until the number of calls reaches 10 (XOR gateway).
The problem is that Signal3 calls Signal4 twice and then the process gets lost.
I think the first pool is waiting for some action, or the signals can only be caught once.

As there is no action running in parallel, a design with a call activity (Step 3 in my diagram below) might achieve what you want:

Related

Erlang/OTP How to notify parent process that child processes are idle and no messages in their mailbox

I would like to design a process hierarchy with a parent process P that acts as a gatekeeper and delegates the work (messages/events from its client processes) to its child processes C1, C2, ..., Cn, which collaborate with each other and may send the result back to P. The child processes cannot talk to any process outside, only to P.
The challenge is that although P may have multiple messages from its clients, it should accept only one message, delegate the work to C1..Cn, and ONLY accept the next message from its clients
when all its child processes are done (or idle) and there are no more messages circulating between C1 and Cn.
P finishes accepting messages from C1..Cn so that it can return the result to its clients.
Constraints:
Idle, for me, means waiting in a receive (blocking) or having simply exited.
C1 to Cn are finite state machines. Some or all of them may send messages back to P, or there may be no messages to send back at all. Even if no messages are sent back, P has to figure out that all of them are done and that there are no messages in flight between them.
If any of C1 to Cn has been pre-empted, it is considered busy (this may be obvious, but I thought I'd put it here for completeness) and P will not accept the next message.
Is there an OTP pattern or library that will do this for me (before I hack something together)? I know that process_info can tell me whether a process's mailbox is empty, and I could keep checking the children's mailboxes from P, but that would be unnecessary polling.
EDIT GENERAL: I am trying to implement a reactive variant of Flow-Based Programming on the Erlang platform. This has the notion of 'hierarchical processes', or composites, which themselves may contain composite processes until we reach boxes of actual code... I am going to research this (looking at monitor, process_info, process_flag), but I wanted to respond to your excellent answers.
EDIT RECURSIVE PARENTS: Each of C1..Cn can itself be a parent/composite process. If I just spawn processes and let them exit immediately, I'll have to recreate the chain of composites every time, since C1..Cn may themselves be composites (which spawn composites, and so on). Finally, when we reach a leaf box (one that is not a composite node), it is supposed to be a finite state machine, so I'm not sure about spawning them and making them exit quickly if they are FSMs.
EDIT TKOWAL: Since I am trying to create a generic parent/composite process, it does not know 'when' the task ends. All it does is relay the messages it receives from its children to its siblings, with the 'constraint' that it will not accept the next message from its client/siblings until its children are 'done'. The children C1..Cn may send not just one but many messages. I understand from your proposal that wait_for_task_finish will stop blocking the moment it gets the first message, but more messages may be emitted by P's children, and P should wait for all of them. For the same reason (multiple messages are possible from the children), having a task_end symbol will not work either.
Given how inexpensive it is to start up Erlang processes, your gatekeeper could start new children for each incoming task, and then wait for them all to exit normally once they complete their work.
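A minimal sketch of that first idea, assuming a hypothetical run_task/1 function that does one child's share of the work: the gatekeeper spawns and monitors one child per work item and blocks until every one of them has gone down.

%% Spawn one monitored child per work item and wait for all of them to exit.
%% run_task/1 is a placeholder for whatever C1..Cn actually do.
handle_task(WorkItems) ->
    Refs = [element(2, spawn_monitor(fun() -> run_task(Item) end))
            || Item <- WorkItems],
    wait_for_children(Refs).

wait_for_children([]) ->
    done;
wait_for_children(Refs) ->
    receive
        {'DOWN', Ref, process, _Pid, _Reason} ->
            wait_for_children(lists:delete(Ref, Refs))
    end.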
But in general, it sounds like you're looking for a process pool. There are a few of these already available, such as poolboy and sidejob. Pools can be harder to get right than you think, so I advise using an existing proven pool implementation before attempting to write your own.
After the edits this became an entirely different question, so I am posting a new answer.
If you are trying to write Flow-Based Programming, then you are probably solving the wrong problem. FBP is effective because almost everything is asynchronous and you start processing the next request immediately after you have finished with the previous one.
So the answer is: don't wait for the children to finish.
In FBP, there is no time dependencies between the components. So if I have a chunk of data, it should be able to flow from one end of the diagram to the other regardless of how any other pieces of data are being handled. In order to program an FBP system, you have to minimize your dependencies. (source)
When creating the parent and children, you know all the connections between blocks, so just configure the children to send processed data directly to the next block. For example: P1 has children C1 and C2. You send a message to P1, it delegates it to C1, the packet flows a couple of times between C1 and C2, and after that C1 or C2 sends it directly to P2.
Blocks should be stateless. Their output should not depend on previous requests, so it is OK even if C1 and C2 are processing data from two different requests to P1. There could be situations where P1 gets data packet D1 and then D2, but outputs the answers in a different order, R2 and then R1. That is also OK. You can use an Erlang reference to tag messages and then check which response belongs to which request.
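One way to do that tagging (the message shapes here are just illustrative, not part of the answer): make_ref/0 gives every request a unique tag, and the caller matches replies on that tag, so out-of-order responses still end up with the right request.

%% Tag each request with a unique reference so replies can arrive
%% in any order and still be matched to the request that caused them.
request(P1, Data) ->
    Ref = make_ref(),
    P1 ! {request, Ref, self(), Data},
    Ref.

collect_reply(Ref) ->
    receive
        {reply, Ref, Result} -> Result    % only matches this request's reply
    end.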
I don't think there is a ready-made library for that, but it is really easy to hack, unless I missed something. Your P process should look like this:
%% Accept exactly one task, hand it to the workers, then block
%% until that task has finished before accepting the next one.
ready_for_next_task() ->
    receive
        {task, Task, CallerPid} ->
            send_task_to_workers(Task)
    end,
    wait_for_task_finish(CallerPid).

wait_for_task_finish(CallerPid) ->
    receive
        {task_end, Response} ->
            CallerPid ! Response
    end,
    ready_for_next_task().
In wait_for_task_finish/1 you have only one clause in the receive, so the process will not accept the next task until the current one is finished. If you are waiting for multiple responses from the workers, you can simply add a second clause to the receive that matches a partial response and calls wait_for_task_finish/1 recursively.
It is always better to have some explicit indicator that the processing has ended, because you have no guarantees on message delivery time. Otherwise you could observe that all processes are currently waiting for a message and conclude that they have finished processing, when in fact they have not started yet, or one of them has sent a message to another and you caught them before the second one had it in its mailbox.
If the processes C1..Cn each hold only part of the actual work and don't know about the overall progress, then the gatekeeper P should know how many parts there were, receive all of them one by one, and only then call ready_for_next_task/0.
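A sketch of that counted variant (the extra arguments and the partial_result message are assumptions on top of the answer's code, not something from the original):

%% Wait for exactly PartsLeft partial results, then hand the collected
%% responses back to the caller and go back to accepting tasks.
wait_for_task_finish(CallerPid, Acc, 0) ->
    CallerPid ! {result, lists:reverse(Acc)},
    ready_for_next_task();
wait_for_task_finish(CallerPid, Acc, PartsLeft) ->
    receive
        {partial_result, R} ->
            wait_for_task_finish(CallerPid, [R | Acc], PartsLeft - 1)
    end.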

NSTimer to control a series of events

Can NSTimer be used to fire a series of events? For instance, for effect:
it's kicked off by clicking start to toss
create a random number
wait 5 seconds, show the result
wait 3 seconds, start the match?
You can use it to repeat at a given interval, but not a variable one. If you really wanted to wait 5 seconds and then wait another 3 seconds, you'd probably want to chain timers: when the first timer fires and calls its method, that method creates a second timer with a different time interval.
This is actually a case where the Prototype pattern would apply: make an NSTimer, set it up with all the properties you want, and then clone that object each time you need another. Or you could just make a factory. Objective-C does not have a built-in clone, but the NSCoding protocol is actually a workable and proper way of doing cloning (unlike Java's broken, and abandoned, clone interface).

Parse a list 3 threads at a time; when 5 works are completed, signal the server to do something

Hi, I am curious whether anyone knows of a tutorial example where semaphores are used for more than one process/thread. I'm trying to solve the following problem. I have an array of elements and x threads. These threads work over the array, only 3 at a time. After 5 pieces of work have been completed, the server is signalled and it cleans those 5 nodes. But I'm having problems designing this. (A node contains a worker value, the 'name' of the thread that is allowed to work on it, respectively nrNodes % nrThreads.)
In order to make changes on the list, a mutex is necessary so as not to overwrite values or make false evaluations.
But I have no clue how to limit the parsing of the list to 3 threads at a given point, or how to signal the main thread to start a cleaning session. I have been thinking about using a semaphore and a global counter: when the counter reaches 5, the server (which would probably be another thread) gets signalled.
Sorry for the lack of code, but this is a conceptual question; what I have written so far doesn't affect the question in any way.

A "Temporal Pass" Module

I am trying to create a general module that collects data at irregular intervals. Data arrives at the left end as soon as it is produced; this may be something like 100 times a second.
On the right end I want to be able to "plug in" n listeners, each with its own regular interval. For the sake of simplicity, let's say they all run at an interval of once per second.
Every listener registers a callback function that may or may not be asynchronous.
My problem is that if a callback function is synchronous, my "temporal pass" may hang. What is the best way to approach this? Should I spawn a process whose sole purpose is to pass along the data, and pay the price if the callback hangs?
             +-------------+  Data Out 1
  =======>   |Temporal Pass| ==========>
  Data In    +-------------+ \\  Data Out 2
                              ++=======>
                               \\  Data Out n
                                ++=======>
Spawn a new process for each message; otherwise the process will wait until the synchronous calls are done. This is exactly the sort of problem the process model is meant to solve, and I do not see any other way to do it.
Spawning processes is not expensive, but not entirely free either. You may get a small performance boost by only spawning new processes for the synchronous calls. That will require some way of flagging each callback as either synchronous or asynchronous.
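A rough sketch of that idea, assuming each registered listener is kept as a {Callback, sync | async} pair (that representation is an assumption, not part of the question): synchronous callbacks get their own throwaway process so a hanging callback cannot block the temporal pass, while asynchronous ones are called in place.

%% Fan one data sample out to every listener. Synchronous callbacks are
%% wrapped in a fresh process so they cannot stall the dispatcher.
dispatch(Data, Listeners) ->
    lists:foreach(
        fun({Callback, sync})  -> spawn(fun() -> Callback(Data) end);
           ({Callback, async}) -> Callback(Data)
        end,
        Listeners).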

What is the BEST-WORST case time to add and remove elements from a queue that has been implemented with the help of 2 stacks

I recently came across this question
What is the BEST-WORST case time to add and remove elements from a queue that has been implemented with the help of 2 stacks.
I could not come up with a great answer... can you all suggest one?
Well... I honestly can't see a WORST-BEST distinction here. The best scenario is when you only have one element in the list, because then both the queue and the stack return the same thing (disregarding, of course, an empty queue =D).
But in most cases this construction has to perform two operations on each element of the stacks for each element pushed through the queue, which means you get O(2n)... and that is linear, so the more elements you have, the worse it gets.
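For reference, a minimal two-stack queue written as two Erlang lists, in the same spirit as the standard queue module: pushes go onto the in-stack in O(1), pops come off the out-stack in O(1), and only when the out-stack is empty does the whole in-stack get reversed, which is the linear worst case the answer above is pointing at.

%% Queue built from two stacks (lists). Enqueue is always O(1);
%% dequeue is O(1) except when Out is empty and In must be reversed.
new() -> {[], []}.

enqueue(X, {In, Out}) -> {[X | In], Out}.

dequeue({[], []})      -> empty;
dequeue({In, []})      -> dequeue({[], lists:reverse(In)});
dequeue({In, [H | T]}) -> {H, {In, T}}.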
