Looking for an example of "request as a process" in Erlang - erlang

In One major difference - ZeroMQ and Erlang, the author briefly mentions the "request as a process" idea. I'm new to Erlang and I'd like to see an example or an article on how to do it.
Any resource or hint will be highly appreciated.

I am a newbie too, but to me the idea seems simple: for each request, whatever it is, spawn a new process and let it handle the request. That is all. This talk covers it too: http://vimeo.com/19806728
So my understanding is that when you receive a request, you spawn a process by calling spawn(Module, Function, Args), or another variant of this function (see http://www.erlang.org/doc/reference_manual/processes.html), and pass the request data in the Args list. Module and Function identify the function that is executed when the process starts and that deals with the Args.
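A minimal sketch of that pattern (the module name, function names, and message shapes here are made up for illustration, not taken from the article): a dispatcher loop receives requests and spawns one handler process per request, so the dispatcher never blocks on any single request.

-module(request_per_process).
-export([loop/0, handle_request/2]).

%% Dispatcher: spawn one process per incoming request and go back to waiting.
loop() ->
    receive
        {request, From, Payload} ->
            spawn(?MODULE, handle_request, [From, Payload]),
            loop();
        stop ->
            ok
    end.

%% Handler: does the work for a single request and replies to the caller.
handle_request(From, Payload) ->
    Result = {ok, Payload},   %% whatever processing the request needs
    From ! {response, self(), Result}.

Each handler can crash, block, or take as long as it needs without affecting the dispatcher or the other requests in flight.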

Related

How to »present« in SnakeYaml?

I'm trying to use SnakeYAML for stream processing of (big) YAML documents.
(Context)
Currently, I'm stuck at the »present« step. It seems that the »present« process is not available in SnakeYAML, or at least I'm unable to find it, i.e. I can parse a string into Events, but I cannot put Events back together into a string.
Have I overlooked a »present« process in SnakeYAML? Or is there some third party code out there that can perform the »present« step?
I don't have enough memory to hold a full Node graph.
You can perform the Present process using the org.yaml.snakeyaml.emitter.Emitter class. I overlooked it the first few times because the void emit(Event) method signature confused me.

script variable should not be used for call processing?

Sir,
I am trying to create a stateful proxy in OpenSIPS 2.4.
I just wanted a variable to hold received message information and process it.
So I checked "core variables" in the OpenSIPS manual. It says script variables are per-process. So should I not use a script variable to hold a header value, like $var(Ruri) = $ru? Will it be overwritten by another call?
$var(userName)=$rU;
$var(removePlus) = '+';
# Search the string starting at 0 index
if ($(var(userName){s.index, $var(removePlus)}) == 0) {
    $rU = $(var(userName){s.substr,1,0});
}
$var variables are process-local, meaning that you can't share them with other SIP workers even if you wanted to! In fact, they are so optimized that their starting value will often be whatever the same process left behind during a previous SIP message processing (tip: you can prove this by running opensips with children = 1 and making two calls).
On the other hand, variables such as $avp are shared between processes, but not in a "dangerous" way that you have to worry about two INVITE retransmissions processing in parallel, each overwriting the other one's $avp, etc. No! That is taken care of under the hood. The "sharing" only means that, for example, during a 200 OK reply processed by a different process than the one which relayed the initial INVITE, you will still be able to read and write to the same $avp which you set during request processing.
Finally, your code seems correct, but it can be greatly simplified:
if ($rU =~ "^\+")
    strip(1);

How to check if queue with auto-generated name (amq.gen-*) exists?

In the case of non-generated names it's enough to call #'queue.declare' to get a newly created queue or an existing one with the given name. However, when using auto-generated names (beginning with the amq.gen- prefix) it's not as trivial. First of all, amq. is a restricted prefix, so there is no way to call #'queue.declare'{queue=<<"amq.gen-xxx">>}.
I also tried to play with the passive=true option, and although I may then pass a restricted name, I get an exit error when the queue does not exist. The following is the error report:
** Handler sse_handler terminating in init/3
for the reason exit:{{shutdown,
{server_initiated_close,404,
<<"NOT_FOUND - no queue 'amq.gen-wzPK0nIBPzr-dwtZ5Jy58V' in vhost '/'">>}},
{gen_server,call,
[<0.62.0>,
{call,
{'queue.declare',0,
<<"amq.gen-wzPK0nIBPzr-dwtZ5Jy58V">>,
true,false,false,false,false,[]},
none,<0.269.0>},
infinity]}}
Is there any way to solve this problem?
EDIT: Here is a short story behind this question. Disclaimer: I'm an Erlang newbie, so maybe there is a better way to make it work :)
I have a gen_server based application holding SSE (server-sent events) connections with web browsers. Each connection is bound to a RabbitMQ queue. When an SSE connection breaks, it automatically tries to reconnect after a given timeout - this is something that web browsers support out of the box. To reuse the previously created queue I'm trying to check whether a queue with the given name (taken from a request cookie) already exists. It's all done in the init callback.
You can declare a queue with the prefix amq. if the queue already exists. You would get Declare-Ok if the queue exists or access-refused if not. (My question is why would you, though? ;)
Furthermore, you can use the passive option to check if it already exists. According to the AMQP reference, the server treats it as a not-found error if the queue doesn't exist. To catch this in your Erlang client, you could try something along the lines of this:
try
    %% declare queue with passive=true
    queue_exists
catch
    exit:{{shutdown, {server_initiated_close, 404, _}}, _} ->
        queue_does_not_exist
end
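For a fuller picture, here is a sketch using the RabbitMQ Erlang client (amqp_client); the helper name queue_exists/2 is made up, and note that a failed passive declare closes the channel, so it is safest to run the check on a throwaway channel:

-include_lib("amqp_client/include/amqp_client.hrl").

%% Returns true if Queue exists, false otherwise. A 404 from the passive
%% declare kills the channel, which is why we open a dedicated one.
queue_exists(Connection, Queue) ->
    {ok, Channel} = amqp_connection:open_channel(Connection),
    try amqp_channel:call(Channel, #'queue.declare'{queue = Queue,
                                                    passive = true}) of
        #'queue.declare_ok'{} ->
            amqp_channel:close(Channel),
            true
    catch
        exit:{{shutdown, {server_initiated_close, 404, _}}, _} ->
            false
    end.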

Coroutines, multiple requests in Lua

I've been poring over this subject for the past 12 hours, and I simply cannot seem to get anywhere. I do not even know if this is possible, but I'm hoping it is because it would go a long way to continuing my project.
What I am attempting to do is create coroutines so the particular program I use does not freeze up due to its inability to perform asynchronous http requests. I've figured out how to do that part, even though my understanding of coroutines is still in the "Huh? How does that work?" phase. My issue now is being able to respond to multiple requests with the correct information. For instance, the following should produce three separate responses:
foo(a)
foo(b)
foo(c)
where foo initiates a coroutine with the parameters inside. If each is requested separately, it returns the proper result. However, if they are requested as a block, it will only return foo(c)'s result. Now, I understand the reasoning behind this, but I cannot find a way to make it return all three results when requested as a block. To help understand this problem a bit, here's the actual code:
function background_weather()
  local loc = url.escape(querystring)
  weatherpage = http.request("http://api.wunderground.com/api/004678614f27ceae/conditions/q/" .. loc .. ".json")
  wresults = json.decode(weatherpage)
  -- process some stuff here, mainly datamining
  -- send datamined information as a response
  coroutine.yield()
end
And the creation of the coroutine:
function getweather ()
  -- see if backgrounder running
  if background_task == nil or
     coroutine.status (background_task) == "dead" then
    -- not running, create it
    background_task = coroutine.create (background_weather)
    -- make timer to keep it going
    AddTimer ("tickler", 0, 0, 1, "",
              timer_flag.Enabled + timer_flag.Replace,
              "tickle_it")
  end -- if
end -- function
The querystring variable is set with the initial request. I didn't include it here, but for the sake of testing, use 12345 as the querystring variable. The timer is something that the original author of the script initialized to check if the coroutine was still running or not, poking the background every second until done. To be honest, I'm not even sure if I've done this correctly, though it seems to run asynchronously in the program.
So, is it possible to receive multiple requests in one block and return multiple responses, correctly? Or is this far too much a task for Lua to handle?
Coroutines don't work like that. They are, in fact, blocking.
The problem coroutines resolve is "I want to have a function I can execute for a while, then go back to do other thing, and then come back and have the same state I had when I left it".
Notice that I didn't say "I want it to keep running while I do other things"; the flow of code "stops" on the coroutine, and only continues on it when you go back to it.
Using coroutines you can modify (and in some cases facilitate) how the code behaves, to make it more evident or legible. But it is still strictly single-threaded.
Remember that Lua's implementation is constrained to what standard C (C99) provides. Since that standard doesn't come with a thread implementation, Lua is strictly single-threaded by default. If you want multi-threading, you need to hook in an external lib. For example, luvit hooks LuaJIT up with the libuv lib to achieve this.
A couple good references:
http://lua-users.org/wiki/CoroutinesTutorial
http://lua-users.org/wiki/ThreadsTutorial
http://lua-users.org/wiki/MultiTasking
http://kotisivu.dnainternet.net/askok/bin/lanes/comparison.html
Chapter 9.4 of Programming in Lua contains a fairly good example of how to deal with this exact problem, using coroutines and LuaSocket's socket.select() function to prevent busylooping.
Unfortunately I don't believe there's any way to use the socket.http functions with socket.select; the code in the PiL example is often all you'll need, but it doesn't handle some fairly common cases such as the requested URL sending a redirect.

Resolving a deadlock between two gen_tcp

While browsing the code of an Erlang application, I came across an interesting design problem. Let me describe the situation, although I can't post any code because of a PIA, sorry.
The code is structured as an OTP application in which two gen_server modules are responsible for allocating some kind of resources. The application runs perfectly for some time and we didn't really have big issues.
The tricky part begins when the first gen_server needs to check whether the second has enough resources left. A call is issued to the second gen_server, which itself calls a utility library that (in a very, very special case) issues a call back to the first gen_server.
I'm relatively new to Erlang, but I think this situation is going to make the two gen_servers wait for each other.
This is probably a design problem, but I just wanted to know if there is any special mechanism built into OTP that can prevent this kind of "hang".
Any help would be appreciated.
EDIT:
To summarize the answers: if you have a situation where two gen_servers call each other in a cyclic way, you'd better spend some more time on the application design.
Thanks for your help :)
This is called a deadlock and could/should be avoided at the design level. Below is a possible workaround and some subjective points that hopefully help you avoid making a mistake.
While there are ways to work around your problem, "waiting" is exactly what the call is doing.
One possible work around would be to spawn a process from inside A which calls B, but does not block A from handling the call from B. This process would reply directly to the caller.
In server A:
handle_call(do_spaghetti_call, From, State) ->
    spawn(fun() -> gen_server:reply(From, call_server_B(more_spaghetti)) end),
    {noreply, State};
handle_call(spaghetti_callback, _From, State) ->
    {reply, foobar, State}.
In server B:
handle_call(more_spaghetti, _From, State) ->
    {reply, gen_server:call(server_a, spaghetti_callback), State}.
For me this is very complex and superhard to reason about. I think you even could call it spaghetti code without offending anyone.
On another note, while the above might solve your problem, you should think hard about what calling like this actually implies. For example, what happens if server A executes this call many times? What happens if at any point there is a timeout? How do you configure the timeouts so they make sense? (The innermost call must have a shorter timeout than the outer calls, etc).
I would change the design, even if it is painful, because when you allow this to exist and work around it, your system becomes very hard to reason about. IMHO, complexity is the root of all evil and should be avoided at all costs.
It is mostly a design issue where you need to make sure that there are no long blocking calls from gen_server1. This can quite easily be done by spawning a small fun which takes care of your call to gen_server2 and then delivers the result to gen_server1 when done.
You would have to keep track of the fact that gen_server1 is waiting for a response from gen_server2. Something like this maybe:
handle_call(Msg, From, S) ->
    Self = self(),
    spawn(fun() ->
              Res = gen_server:call(gen_server2, Msg),
              gen_server:cast(Self, {reply, Res})
          end),
    {noreply, S#state{ from = From }}.

handle_cast({reply, Res}, S = #state{ from = From }) ->
    gen_server:reply(From, Res),
    {noreply, S#state{ from = undefined }}.
This way gen_server1 can serve requests from gen_server2 without hanging. You would of course also need to do proper error propagation for the small process, but you get the general idea.
Another way of doing it, which I think is better, is to make this (resource) information passing asynchronous. Each server reacts and does what it is supposed to when it gets an (asynchronous) my_resource_state message from the other server. It can also prompt the other server to send its resource state with a send_me_your_resource_state asynchronous message. As both these messages are asynchronous they will never block, and a server can process other requests while it is waiting for a my_resource_state message from the other server after prompting it.
Another benefit of having the message asynchronous is that servers can send off this information without being prompted when they feel it is necessary, for example "help me I am running really low!" or "I am overflowing, do you want some?".
The two replies from #Lukas and #knutin actually do do it asynchronously, but they do it by spawning a temporary process, which can then do synchronous calls without blocking the servers. It is easier to use asynchronous messages straight off, and clearer in intent as well.
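A minimal sketch of that asynchronous exchange (the message names come from the paragraph above; the #state record and its fields are made up for illustration):

%% In each resource server: prompt the peer, keep serving other requests,
%% and react whenever the peer's resource state arrives.
handle_cast(send_me_your_resource_state, State = #state{peer = Peer}) ->
    gen_server:cast(Peer, {my_resource_state, self(), State#state.resources}),
    {noreply, State};
handle_cast({my_resource_state, _Peer, PeerResources}, State) ->
    %% Nothing here blocks, so the two servers can never deadlock on each other.
    {noreply, State#state{peer_resources = PeerResources}}.

Because both messages are casts, neither server ever sits in a gen_server:call waiting for the other.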
