BPMN: task continues inside another process

I have 2 processes: decision and implementation. In my decision process, I have a gateway to check if it's an emergency or not. If it's an emergency then continue in the same decision process.
And here's the problem: if the task is not an emergency, then it continues to point A, while point A starts in the middle of the implementation process. Please help me, what should I use?

You need to separate the implementation process into two subprocesses. The part between the gateway and point A should become something like a non-emergency subprocess.

Related

Erlang supervisor processes

I have been learning Erlang intensively, and after finishing 'Programming Erlang' by Joe Armstrong, there is one thing that I keep coming back to.
In my mind a supervisor spawns one process per child handler, so each declared gen_server-type handler will run as a separate process.
What happens if you are building a tiny web server and you want each request to be its own process? Do you still conform to OTP principles and use a gen_server somehow (and if so, how?), or do you create your own behaviour?
How does Cowboy handle this, for example? Does it still use gen_server?
tl;dr: I find that trying to figure out the "correct" supervision structure at the beginning of a project is a form of premature optimization.
The "right" way to design your supervision tree depends on what the worker parts of your application are doing. In the case of a web server I would probably first explore something along the lines of:
top supervisor (singular)
    data service supervisor (one per service type)
        worker pool (all workers under the service sup)
    client connection supervisor (one)
        connection worker pool (or one per connection; have to play with it to decide)
    logical supervisor (as appropriate -- massive variance here, depending on problem domain)
        workers or supervisors (as appropriate -- have to explore/know the problem domain to have any idea how this should be structured)
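In code, the top of a tree like that could be sketched as an ordinary supervisor callback module. This is only a sketch; the module and child names (top_sup, data_service_sup, client_conn_sup) are invented for illustration:

```erlang
%% Hypothetical top-level supervisor for the layout sketched above.
%% The child modules data_service_sup and client_conn_sup are made up.
-module(top_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    SupFlags = #{strategy => one_for_one, intensity => 5, period => 10},
    Children =
        [#{id => data_service_sup,
           start => {data_service_sup, start_link, []},
           type => supervisor},
         #{id => client_conn_sup,
           start => {client_conn_sup, start_link, []},
           type => supervisor}],
    {ok, {SupFlags, Children}}.
```

Because init/1 just returns data, experimenting with restart strategies means editing this one function and watching how crashes propagate.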
So that's several workers per supervisor type at the lower levels. I haven't used Cowboy, so I don't know how it is organized. The point I'm trying to make is that while the mechanics of handling data services and serving web pages are relatively trivial, the part of the system that actually does the core problem-solving work might not be, and that part is going to dictate everything interesting about the system.
It is a bad thing to have your problem-solving bits mixed in the same module as your web-displaying or connection handling bits. Ideally you should be able to use the same logic units in a native application, a web application and a networked service without any changes.
Ultimately the answer to whether you should have 1:1 supervisors to workers or 1:n depends on what you're doing and what restart strategy gives you the best balance among recovery to a known consistent state, latency felt by the user, and resource usage.
One of my favorite things about Erlang is that I can start with a naive supervisor structure like the one above, play with it until I see where it's not so good, and rather easily switch things around and experiment with alternatives without fundamentally altering my system much. (The same goes for playing with alternative data representations if you write proper abstractions around them.) So first, get something that works in testing. Then load it up and see if you can break it. Then start worrying about the details, once you understand where the problems actually are.
It is a common pattern in Erlang to spawn one server per client. You then place the client servers under a supervisor using the simple_one_for_one strategy, which lets you ask the supervisor to start a child on demand. Generally this is used when you don't know in advance how many processes you will need, and when the processes are independent (a crash of one process should not impact the others).
There is very good information on the site learnyousomeerlang.com (the LYSE chapter on supervisors); the whole site is worth reading.
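A minimal sketch of that pattern, assuming a hypothetical conn_worker module whose start_link/1 takes the client socket:

```erlang
-module(conn_sup).
-behaviour(supervisor).
-export([start_link/0, start_child/1, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

%% Called on demand, once per accepted client connection; the
%% argument list is appended to the one in the child spec.
start_child(Socket) ->
    supervisor:start_child(?MODULE, [Socket]).

init([]) ->
    SupFlags = #{strategy => simple_one_for_one,
                 intensity => 10, period => 5},
    %% restart => temporary: a crashed connection is independent of
    %% the others and is simply dropped, not restarted.
    Child = #{id => conn_worker,
              start => {conn_worker, start_link, []},
              restart => temporary},
    {ok, {SupFlags, [Child]}}.
```

With simple_one_for_one the supervisor holds exactly one child spec and stamps out dynamic copies of it.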

What function is too small for spawning into its own process

Where is the limit where there is no benefit of spawning a process to make a more parallelized function call?
For example when doing a recursive lookup in a tree structure, each child node would add a process and a message call to the parent just for a simple comparison.
Spawning a process and then doing the work will always be slower than just doing the work directly. Whether it pays off depends strongly on your exact requirements, especially the non-functional ones, so go and measure; it's pretty easy. See the documentation about profiling for more details; there are also third-party projects that make benchmarking easier.
Spawning more processes won't necessarily make tasks run in parallel. For example, if you have 24 cores on your system, only 24 processes can run at any one time.
Instead it might be better to think about how much work is being done when you examine a node in the tree. Let's say the node value represents a URL which needs to be fetched to retrieve a value. In that case it might be a good idea to spawn a process for each node: one process can be scheduled to run while another is waiting for the answer to its HTTP request.
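The usual way to express "one process per element" is a parallel map. A minimal sketch; the spawn overhead only pays off when F blocks or does substantial work (an HTTP fetch, not a cheap comparison):

```erlang
-module(pmap).
-export([pmap/2]).

%% Spawn one process per list element and collect the results in
%% the original order. Each worker tags its reply with a unique
%% reference so replies cannot be confused with other messages.
pmap(F, List) ->
    Parent = self(),
    Refs = [begin
                Ref = make_ref(),
                spawn_link(fun() -> Parent ! {Ref, F(X)} end),
                Ref
            end || X <- List],
    [receive {Ref, Result} -> Result end || Ref <- Refs].
```

Benchmarking pmap:pmap(F, L) against lists:map(F, L) for your actual F is the quickest way to find the break-even point.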

Identify the core an Erlang process runs on

Any way to identify the specific core an Erlang process is scheduled on?
Let's say you spawn a bunch of processes that simply print out the core they are running on and then exit. Is there any way to do this?
I spent some time reading docs and googling but couldn't find anything.
Thanks.
EDIT: "core" = CPU core number (or if not number, another identifier that identifies the CPU core).
There is erlang:system_info(scheduler_id), which in most cases maps to a logical core. But this information is ephemeral: the process may be suspended and later resumed on any other scheduler.
What is your use case that you really need that kind of information?
No, there is not. If you spawn 2000 processes and they terminate quickly, chances are that you will finish the job before any rebalancing occurs, in which case a single core would be doing all the work.
You could take a look at the scheduler utilization calls, however; see erlang:statistics(scheduler_wall_time). It will tell you how much work each scheduler is really doing.
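A small sketch combining both answers: each spawned process reports the scheduler it happens to be on at that instant (a snapshot, not a guarantee), and scheduler_wall_time shows how busy each scheduler has been:

```erlang
-module(sched_info).
-export([report_schedulers/1, scheduler_time/0]).

%% Each spawned process reports the scheduler it is running on at
%% that moment; the scheduler id usually maps to a logical core,
%% but the process may migrate to another scheduler at any time.
report_schedulers(N) ->
    Parent = self(),
    [spawn(fun() ->
                   Parent ! {scheduler, erlang:system_info(scheduler_id)}
           end) || _ <- lists:seq(1, N)],
    [receive {scheduler, Id} -> Id end || _ <- lists:seq(1, N)].

%% Per-scheduler {Id, ActiveTime, TotalTime} tuples; the flag must
%% be enabled before the statistics are collected.
scheduler_time() ->
    erlang:system_flag(scheduler_wall_time, true),
    erlang:statistics(scheduler_wall_time).
```

Comparing two scheduler_time() snapshots taken before and after a workload gives per-scheduler utilization.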

How to fit non-event driven processes into supervision tree?

I want to be able to spawn a lot of processes that process data and fit them into a supervision tree. However all default behaviours, namely gen_server, gen_fsm, and gen_event, are event-driven. They have to receive messages to do stuff. What I need are just processes that process data, and in case they terminate abnormally, they should be restarted by their supervisor. What's the best way to go about doing this?
Yes, the standard behaviours all function as servers in that they sit and wait for requests before they do something. However, OTP is open in the sense that it provides the tools you need to implement processes which are not behaviours but which still fit into supervision trees and do "the right thing". For a description of what needs to be done and how to do it, see the section on special processes (sys and proc_lib) in the Erlang documentation.
This is not really surprising, since all of the OTP behaviours are themselves implemented in Erlang, so all the "tools" are already there in the libraries.
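A minimal sketch of such a special process: it is started through proc_lib so the supervisor gets proper start/exit semantics, and it polls for system messages between units of work so sys debugging and clean shutdown keep working. The data-crunching part, process_chunk/1, is a placeholder for your own logic:

```erlang
-module(cruncher).
-export([start_link/1, init/2]).
-export([system_continue/3, system_terminate/4]).

start_link(Data) ->
    proc_lib:start_link(?MODULE, init, [self(), Data]).

init(Parent, Data) ->
    Deb = sys:debug_options([]),
    proc_lib:init_ack(Parent, {ok, self()}),
    loop(Parent, Deb, Data).

loop(Parent, Deb, Data) ->
    %% Do one slice of work, then check the mailbox for system
    %% messages so the process stays responsive to its supervisor.
    Data1 = process_chunk(Data),
    receive
        {system, From, Request} ->
            sys:handle_system_msg(Request, From, Parent, ?MODULE, Deb, Data1)
    after 0 ->
        loop(Parent, Deb, Data1)
    end.

system_continue(Parent, Deb, Data) -> loop(Parent, Deb, Data).
system_terminate(Reason, _Parent, _Deb, _Data) -> exit(Reason).

%% Placeholder: replace with the actual data processing step.
process_chunk(Data) -> Data.
</ erlang
```

A real implementation would also handle {'EXIT', Parent, Reason} when trapping exits; this sketch keeps only the parts the supervision contract requires.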

Erlang: Who supervises the supervisor?

In all the Erlang supervisor examples I have seen so far, there is usually a "master" supervisor that supervises the whole tree (or at least is the root node of the supervision tree). What if the "master" supervisor breaks? How should the "master" supervisor itself be supervised? Is there any typical pattern?
The top supervisor is started in your application's start/2 callback using start_link, which means it is linked to the application process. If the application process receives an exit signal because the top supervisor died, it does one of two things:
If the application is started as a permanent application, the entire node is terminated (and possibly restarted using HEART).
If the application is started as temporary, the application stops running and no restart attempts are made.
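The difference comes down to the start type passed to application:start/2, as in this small sketch (the application name is a placeholder):

```erlang
-module(start_modes).
-export([start_permanent/1, start_temporary/1]).

%% permanent: if the application's top supervisor dies, the whole
%% node is halted (and may be restarted externally, e.g. by HEART).
start_permanent(App) ->
    application:start(App, permanent).

%% temporary: if the application terminates, only that application
%% stops; the node keeps running and nothing is restarted.
start_temporary(App) ->
    application:start(App, temporary).
```

There is also a third type, transient, which halts the node only on abnormal termination.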
Typically a supervisor is set up to "only" supervise other processes, which means there is no user-written code executed by the supervisor itself, so it is very unlikely to crash.
Of course, this cannot be enforced, so the typical pattern is to keep any application-specific logic out of supervisors: a supervisor should supervise and do nothing else.
Good question. I have to concur that all of the examples and tutorials mostly ignore the issue, even if occasionally someone mentions it (without providing an example solution):
If you want reliability, use at least two computers, and then make them supervise each other. How to actually implement that with OTP, however, appears (given the current state of documentation and tutorials) to be somewhere between well hidden and secret.
