I am learning Erlang and trying to figure out how I can, and should, save state inside a process.
For example, I am trying to write a program that, given a list of numbers in a file, tells me whether a number appears in that file. My approach is to use two processes:
cache, which reads the contents of the file into a set, waits for numbers to check, and replies whether they appear in the set:
is_member_loop(Data_file) ->
    Numbers = read_numbers(Data_file),
    receive
        {From, Number} ->
            From ! {self(), lists:member(Number, Numbers)},
            is_member_loop(Data_file)
    end.
client, which sends numbers to cache and waits for the true or false response:
check_number(Number) ->
    NumbersPid ! {self(), Number},
    receive
        {NumbersPid, Is_member} ->
            Is_member
    end.
This approach is obviously naive since the file is read for every request. However, I am quite new at Erlang and it is unclear to me what would be the preferred way of keeping state between different requests.
Should I be using the process dictionary? Is there a different mechanism I am not aware of for that sort of process state?
Update
The most obvious solution, as suggested by user601836, is to pass the set of numbers as a parameter to is_member_loop instead of the filename. This seems to be a common idiom in Erlang, and there is a good example in the fantastic online book Learn You Some Erlang.
I think, however, that the question still holds for more complex state that I'd want to preserve in my process.
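For reference, the idiom applied to my loop looks something like this (a sketch: read_numbers/1 is the same helper as above, and the start/1 wrapper is my own addition):

start(Data_file) ->
    Numbers = read_numbers(Data_file),              % read the file once, up front
    spawn(fun() -> is_member_loop(Numbers) end).

is_member_loop(Numbers) ->
    receive
        {From, Number} ->
            From ! {self(), lists:member(Number, Numbers)},
            is_member_loop(Numbers)                 % the set rides along as the argument
    end.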
Simple solution: you can pass the list of numbers to is_member_loop/1 rather than the file name.
The best solution when you need to keep state is to use a gen_server. To learn more, you should take a look at records and the gen_server behaviour (this may also be useful).
In practice:
1) start with a module (yourmodule.erl) based on the gen_server behaviour
2) read your file in the init function of the gen_server and store the result in the state record:
%% Data_file comes in through gen_server:start_link/4 (see the skeleton below)
init([Data_file]) ->
    Numbers = read_numbers(Data_file),
    {ok, #state{numbers = Numbers}}.
3) write a function which will be used to trigger a call to the gen_server
check_number(Number) ->
    gen_server:call(?MODULE, {check_number, Number}).
4) write the code that handles the calls triggered by your function:
handle_call({check_number, Number}, _From, #state{numbers = Numbers} = State) ->
    Reply = lists:member(Number, Numbers),
    {reply, Reply, State};
handle_call(_Request, _From, State) ->
    Reply = ok,
    {reply, Reply, State}.
5) export the function check_number/1 from yourmodule.erl:
-export([check_number/1]).
Two things to be explained about point 4:
a) we extract the values inside the State record using pattern matching;
b) as you can see, I left the generic handle_call clause in place; otherwise your gen_server would fail with a pattern-match error whenever a message other than {check_number, Number} is received.
Note: if you are new to Erlang, don't use the process dictionary.
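Putting steps 1-5 together, a minimal skeleton might look like this (a sketch: read_numbers/1 is assumed to exist, and handle_info/2, terminate/2 and code_change/3 are omitted for brevity):

-module(yourmodule).
-behaviour(gen_server).

-export([start_link/1, check_number/1]).
-export([init/1, handle_call/3, handle_cast/2]).

-record(state, {numbers}).

start_link(Data_file) ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [Data_file], []).

check_number(Number) ->
    gen_server:call(?MODULE, {check_number, Number}).

init([Data_file]) ->
    Numbers = read_numbers(Data_file),
    {ok, #state{numbers = Numbers}}.

handle_call({check_number, Number}, _From, #state{numbers = Numbers} = State) ->
    {reply, lists:member(Number, Numbers), State};
handle_call(_Request, _From, State) ->
    {reply, ok, State}.

handle_cast(_Msg, State) ->
    {noreply, State}.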
Not sure how idiomatic this is, since I'm not exactly an Erlang pro yet, but I'd handle this by using ETS. Basically,
read_numbers_to_ets(DataFile) ->
    Table = ets:new(numbers, [ordered_set]),
    insert_numbers(Table, DataFile),
    Table.

insert_numbers(Table, DataFile) ->
    case read_next_number(DataFile) of
        eof -> ok;
        Num ->
            ets:insert(Table, {Num}),        % insert into the table id, not the atom
            insert_numbers(Table, DataFile)  % keep reading until eof
    end.
you could then define your is_member as
is_member(TableId, Number) ->
    case ets:match(TableId, {Number}) of
        [] -> false;   %% no match from ets
        [[]] -> true   %% ets found the number you're looking for in that table
    end.
Instead of taking a Data_file, your is_member_loop would take the id of the table to do a lookup on.
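As an aside: since the numbers are the keys of the table, ets:member/2 does the same job as the match above with a plain key lookup (a sketch):

is_member(TableId, Number) ->
    ets:member(TableId, Number).   % key-existence check; returns true | false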
Related
I am learning Erlang from a Ruby background and having some difficulty grasping the thought process. The problem I am trying to solve is the following:
I need to make the same request to an API, and each time I receive a unique ID in the response which I need to pass into the next request, until there is no ID returned. From each response I need to extract certain data and use it for other things as well.
First get the iterator:
ShardIteratorResponse = kinetic:get_shard_iterator(GetShardIteratorPayload).
{ok,[{<<"ShardIterator">>,
<<"AAAAAAAAAAGU+v0fDvpmu/02z5Q5OJZhPo/tU7fjftFF/H9M7J9niRJB8MIZiB9E1ntZGL90dIj3TW6MUWMUX67NEj4GO89D"...>>}]}
Parse out the shard_iterator..
{_, [{_, ShardIterator}]} = ShardIteratorResponse.
Make the request to kinesis for the streams records...
GetRecordsPayload = [{<<"ShardIterator">>, <<ShardIterator/binary>>}].
[{<<"ShardIterator">>,
<<"AAAAAAAAAAGU+v0fDvpmu/02z5Q5OJZhPo/tU7fjftFF/H9M7J9niRJB8MIZiB9E1ntZGL90dIj3TW6MUWMUX67NEj4GO89DETABlwVV"...>>}]
14> RecordsResponse = kinetic:get_records(GetRecordsPayload).
{ok,[{<<"NextShardIterator">>,
<<"AAAAAAAAAAFy3dnTJYkWr3gq0CGo3hkj1t47ccUS10f5nADQXWkBZaJvVgTMcY+nZ9p4AZCdUYVmr3dmygWjcMdugHLQEg6x"...>>},
{<<"Records">>,
[{[{<<"Data">>,<<"Zmlyc3QgcmVjb3JkISEh">>},
{<<"PartitionKey">>,<<"BlanePartitionKey">>},
{<<"SequenceNumber">>,
<<"49545722516689138064543799042897648239478878787235479554">>}]}]}]}
What I am struggling with is how to write a loop that keeps hitting the kinesis endpoint for that stream until there are no more shard iterators, i.e. until I have all the records, since I can't re-assign variables as I would in Ruby.
WARNING: My code might be bugged, but it's "close". I've never run it and don't know what the last iterator looks like.
I see you are trying to do your job entirely in the shell. It's possible, but hard. You can use a named fun and recursion (easier since release 17.0, which introduced named funs), for example:
F = fun(ShardIteratorPayload) ->
        {_, [{_, ShardIterator}]} = kinetic:get_shard_iterator(ShardIteratorPayload),
        FunLoop =
            fun Loop(<<>>, Accumulator) -> % no clue what the last iterator looks like
                    lists:reverse(Accumulator);
                Loop(Iterator, Accumulator) ->
                    {ok, [{_, NextShardIterator}, {<<"Records">>, Records}]} =
                        kinetic:get_records([{<<"ShardIterator">>, <<Iterator/binary>>}]),
                    Loop(NextShardIterator, [Records | Accumulator])
            end,
        FunLoop(ShardIterator, [])
    end.
AllRecords = F(GetShardIteratorPayload).
But it's too complicated to type into the shell...
It's much easier to code it in modules.
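For instance, as a module the same loop might look like this (a sketch, under the same assumptions about the kinetic return values and the empty-binary end condition):

-module(kinetic_fetch).

-export([fetch_all/1]).

%% Fetch all records for a stream, accumulating page by page.
fetch_all(GetShardIteratorPayload) ->
    {_, [{_, ShardIterator}]} = kinetic:get_shard_iterator(GetShardIteratorPayload),
    fetch_loop(ShardIterator, []).

fetch_loop(<<>>, Acc) ->                   % assumed end condition
    lists:reverse(Acc);
fetch_loop(ShardIterator, Acc) ->
    {ok, [{_, NextShardIterator}, {<<"Records">>, Records}]} =
        kinetic:get_records([{<<"ShardIterator">>, <<ShardIterator/binary>>}]),
    fetch_loop(NextShardIterator, [Records | Acc]).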
A common pattern in Erlang is to spawn another process or processes to fetch your data. To keep it simple, you can spawn another process by calling spawn or spawn_link, but don't bother with links for now and just use spawn/3.
Let's compile a simple consumer module:
-module(kinetic_simple_consumer).

-export([start/1]).

start(GetShardIteratorPayload) ->
    Pid = spawn(kinetic_simple_fetcher, start, [self(), GetShardIteratorPayload]),
    consumer_loop(Pid).

consumer_loop(FetcherPid) ->
    receive
        {FetcherPid, finished} ->
            ok;
        {FetcherPid, {records, Records}} ->
            consume(Records),
            consumer_loop(FetcherPid);
        UnexpectedMsg ->
            io:format("DROPPING:~n~p~n", [UnexpectedMsg]),
            consumer_loop(FetcherPid)
    end.

consume(Records) ->
    io:format("RECEIVED:~n~p~n", [Records]).
And fetcher:
-module(kinetic_simple_fetcher).

-export([start/2]).

start(ConsumerPid, GetShardIteratorPayload) ->
    {ok, [ShardIterator]} = kinetic:get_shard_iterator(GetShardIteratorPayload),
    fetcher_loop(ConsumerPid, ShardIterator).

fetcher_loop(ConsumerPid, {_, <<>>}) -> % no clue what the last iterator looks like
    ConsumerPid ! {self(), finished};
fetcher_loop(ConsumerPid, ShardIterator) ->
    {ok, [NextShardIterator, {<<"Records">>, Records}]} =
        kinetic:get_records(shard_iterator(ShardIterator)),
    ConsumerPid ! {self(), {records, Records}},
    fetcher_loop(ConsumerPid, NextShardIterator).

shard_iterator({_, ShardIterator}) ->
    [{<<"ShardIterator">>, <<ShardIterator/binary>>}].
As you can see both processes can do their job concurrently.
Try from your shell:
kinetic_simple_consumer:start(GetShardIteratorPayload).
Now you'll see that your shell process becomes the consumer, and you will get your shell back after the fetcher sends {ItsPid, finished}.
Next time instead of
kinetic_simple_consumer:start(GetShardIteratorPayload).
run:
spawn(kinetic_simple_consumer, start, [GetShardIteratorPayload]).
You should play with spawning processes - it's Erlang's main strength.
In Erlang, you can write a loop using tail-recursive functions. I don't know the kinetic API, so for simplicity I'll just assume that kinetic:next_iterator/1 returns {ok, NextIterator}, or {error, Reason} when there are no more shards.
loop({error, _Reason}) ->
    ok;
loop({ok, Iterator}) ->
    do_something_with(Iterator),
    Result = kinetic:next_iterator(Iterator),
    loop(Result).
You are replacing the loop with recursion. The first clause deals with the case where there are no more shards left (always start recursion with the end condition). The second clause deals with the case where we got some iterator: we do something with it and call loop with the next one.
The recursive call is the last instruction in the function body, which is called tail recursion. Erlang optimizes such calls - they don't grow the call stack, so they can run indefinitely in constant memory (you will not get anything like "stack level too deep").
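To see the difference, compare a body-recursive sum, where the addition happens after the recursive call returns and therefore needs a stack frame per element, with a tail-recursive version carrying an accumulator (a standard illustration, unrelated to the kinetic API):

%% Body-recursive: each call waits for the result of the next one.
sum([]) -> 0;
sum([H | T]) -> H + sum(T).

%% Tail-recursive: the call is the last instruction, so the stack does not grow.
sum2(L) -> sum2(L, 0).

sum2([], Acc) -> Acc;
sum2([H | T], Acc) -> sum2(T, Acc + H).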
What is the difference in ending a function with end and ok in Erlang? I've been trying to grasp the meaning in the following code:
-module(esOne).

-export([start/1, func/1]).

start(Par) ->
    io:format("Client: I am ~p, spawned by the server: ~p~n", [self(), Par]),
    spawn(esOne, func, [self()]),
    Par ! {onPid, self()},
    serverEsOne ! {onName, self()},
    receiveMessage(),
    ok.

receiveMessage() ->
    receive
        {reply, N} ->
            io:format("Client: I received a message: ~p~n", [N])
    after 5000 ->
        io:format("Client: I received no message, i quit~n", [])
    end.

func(Parent) ->
    io:format("Child: I am ~p, spawned from ~p~n", [self(), Parent]).
This code works in conjunction with another .erl file that acts as a server. I managed to write this only by analyzing the given server file and copying its behavior. At first I thought ok was used to end every function, but that is not the case, as I can't end receiveMessage() with ok. Then I thought I could maybe end every function with end, but start(Par) gives an error if I replace ok with end. Not only that, but in the server file I see that ok and end are used within functions to end loops. The ways they're used look the same to me, yet they clearly fulfill separate functions, as one cannot be replaced by the other. Some clarification would be much appreciated.
Two points to understand:
Some code block types in Erlang are closed with an end: if ... end, case ... end, receive ... [after N] ... end, and so on. It is certainly possible to use end as its own atom in place of ok, but that is not what is happening above.
Every function in Erlang returns some value. If you aren't explicit about it, it returns the value of the last expression. The "=" operator isn't assignment to a variable like in other languages, it is assignment to a symbol as in math, meaning that reassigning is effectively a logical assertion. If the assertion fails, the process throws an exception (meaning it crashes, usually).
When you end something with "ok" or any other atom you are providing a known final value that will be returned. You don't have to do anything with it, but if you want the calling process to assert that the function completed or crash if anything unusual happened then you can:
do_stuff() ->
ok = some_func().
instead of
do_stuff() ->
some_func().
If some_func() may have had a side effect that can fail, it will usually return either ok or {error, Reason} (or something similar). By checking that the return was ok we prevent the calling process from continuing execution if something bad happened. That is central to the Erlang concept of "let it crash". The basic idea is that if you call a function that has a side-effect and it does anything unexpected at all, you should crash immediately, because proceeding with bad data is worse than not proceeding at all. The crash will be cleaned up by the supervisor, and the system will be restored to a known state instead of being in whatever random condition was left after the failure of the side-effect.
A variation on the bit above is to have the "ok" part appear in a tuple if the purpose of a function is to return a value. You can see this in any dict-type handling library, for example. The reason some data returning functions have a return type of {ok, Value} | {error, Reason} instead of just Value | {error, Reason} is to make pattern matching more natural.
Consider the following case clauses:
case dict:find(Key, Dict) of
    {ok, Value} ->
        Value;
    error ->
        log(error, key_not_found),
        error
end.
And:
case finder(Key, Struct) of
    Value ->
        Value;
    {error, Reason} ->
        log(error, Reason),
        error
end.
In the first example we match the success condition first. In the second version, though, this is impossible because the error clause could never match; any return at all would always be represented by Value. Oops.
Most of the time (but not quite always) functions that return a value or crash will return just the value. This is especially true of pure functions that carry no state but what you pass in and have no side effects (for example, dict:fetch/2 gives the value directly, or crashes the calling process, giving you an easy choice which way you want to do things). Functions that return a value or signal an error usually wrap a valid response in {ok, Value} so it is easy to match.
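A small illustration of the two styles with the stdlib dict module:

%% "Value or crash": fetch/2 returns the bare value and raises an
%% exception if the key is absent -- the caller opts into "let it crash".
Value = dict:fetch(Key, Dict).

%% "Value or error": find/2 wraps success in {ok, Value}, so the
%% failure case can be pattern-matched without ambiguity.
case dict:find(Key, Dict) of
    {ok, Found} -> Found;
    error       -> not_found
end.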
In most applications, it's hard to avoid the need to query large amounts of information which a user wants to browse through. This is what led me to cursors. With mnesia, cursors are implemented using qlc:cursor/1 or qlc:cursor/2. After working with them for a while and facing this problem many times,
11> qlc:next_answers(QC,3).
** exception error: {qlc_cursor_pid_no_longer_exists,<0.59.0>}
in function qlc:next_loop/3 (qlc.erl, line 1359)
12>
It has occurred to me that the whole cursor thing has to live within a single mnesia transaction: it executes as a whole, once, like this below:
E:\>erl
Eshell V5.9 (abort with ^G)
1> mnesia:start().
ok
2> rd(obj,{key,value}).
obj
3> mnesia:create_table(obj,[{attributes,record_info(fields,obj)}]).
{atomic,ok}
4> Write = fun(Obj) -> mnesia:transaction(fun() -> mnesia:write(Obj) end) end.
#Fun<erl_eval.6.111823515>
5> [Write(#obj{key = N,value = N * 2}) || N <- lists:seq(1,100)],ok.
ok
6> mnesia:transaction(fun() ->
QC = cursor_server:cursor(qlc:q([XX || XX <- mnesia:table(obj)])),
Ans = qlc:next_answers(QC,3),
io:format("\n\tAns: ~p~n",[Ans])
end).
Ans: [{obj,20,40},{obj,21,42},{obj,86,172}]
{atomic,ok}
7>
When you attempt to call, say, qlc:next_answers/2 outside a mnesia transaction, you will get an exception. And not only outside the transaction: even if that method is executed by a different process than the one which created the cursor, problems are bound to happen.
Another interesting finding is that as soon as you get out of a mnesia transaction, one of the processes involved in a mnesia cursor (apparently mnesia spawns a process in the background) exits, causing the cursor to become invalid. Look at this below:
-module(cursor_server).
-compile(export_all).

cursor(Q) ->
    case mnesia:is_transaction() of
        false ->
            F = fun(QH) -> qlc:cursor(QH, []) end,
            mnesia:activity(transaction, F, [Q], mnesia_frag);
        true ->
            qlc:cursor(Q, [])
    end.
%% --- End of module -------------------------------------------
Then, in the shell, I use that method:
7> QC = cursor_server:cursor(qlc:q([XX || XX <- mnesia:table(obj)])).
{qlc_cursor,{<0.59.0>,<0.30.0>}}
8> erlang:is_process_alive(list_to_pid("<0.59.0>")).
false
9> erlang:is_process_alive(list_to_pid("<0.30.0>")).
true
10> self().
<0.30.0>
11> qlc:next_answers(QC,3).
** exception error: {qlc_cursor_pid_no_longer_exists,<0.59.0>}
in function qlc:next_loop/3 (qlc.erl, line 1359)
12>
So this makes it extremely hard to build a web application in which a user needs to browse a particular set of results group by group, say: give them the first 20, then the next 20, etc. This involves getting the first results, sending them to the web page, waiting for the user to click NEXT, then asking qlc:cursor/2 for the next 20, and so on. These operations cannot be done while hanging inside a mnesia transaction! The only possible way is to spawn a process which will hang there, receiving and sending back next answers as messages, and receiving the next_answers requests as messages, like this:
-define(CURSOR_TIMEOUT, timer:hours(1)).

%% the initial request is made here
request(PageSize) ->
    Me = self(),
    CursorPid = spawn(?MODULE, cursor_pid, [Me, PageSize]),
    receive
        {initial_answers, Ans} ->
            %% find a way of hiding the cursor pid
            %% in the page so that the subsequent requests
            %% come along with it
            {Ans, pid_to_list(CursorPid)}
    after ?CURSOR_TIMEOUT -> timedout
    end.

cursor_pid(ParentPid, PageSize) ->
    F = fun(Pid, N) ->
            QC = cursor_server:cursor(qlc:q([XX || XX <- mnesia:table(obj)])),
            Ans = qlc:next_answers(QC, N),
            Pid ! {initial_answers, Ans},
            receive
                {From, {next_answers, Num}} ->
                    From ! {next_answers, qlc:next_answers(QC, Num)};
                    %% Problem here! How to loop back?
                    %% check: Erlang Y-combinator
                delete ->
                    %% it could have died already, so we are careful here!
                    try qlc:delete_cursor(QC) of
                        _ -> ok
                    catch
                        _:_ -> ok
                    end,
                    exit(normal)
            after ?CURSOR_TIMEOUT -> exit(normal)
            end
        end,
    mnesia:activity(transaction, F, [ParentPid, PageSize], mnesia_frag).

next_answers(CursorPid, PageSize) ->
    list_to_pid(CursorPid) ! {self(), {next_answers, PageSize}},
    receive
        {next_answers, Ans} ->
            {Ans, pid_to_list(CursorPid)}
    after ?CURSOR_TIMEOUT -> timedout
    end.
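(As an aside, the "how to loop back" gap flagged in the comments above can be closed with a named fun, available since OTP 17, instead of reaching for a Y-combinator; a sketch of just the cursor process, everything else as above:)

cursor_pid(ParentPid, PageSize) ->
    F = fun(Pid, N) ->
            QC = cursor_server:cursor(qlc:q([XX || XX <- mnesia:table(obj)])),
            Pid ! {initial_answers, qlc:next_answers(QC, N)},
            Serve = fun Loop() ->
                        receive
                            {From, {next_answers, Num}} ->
                                From ! {next_answers, qlc:next_answers(QC, Num)},
                                Loop();                       % loop back here
                            delete ->
                                catch qlc:delete_cursor(QC),
                                exit(normal)
                        after ?CURSOR_TIMEOUT -> exit(normal)
                        end
                    end,
            Serve()
        end,
    mnesia:activity(transaction, F, [ParentPid, PageSize], mnesia_frag).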
That would create a more complex problem of managing process exits, tracking/monitoring, etc. I wonder why the mnesia implementers did not see this!
Now, that brings me to my questions. I have been walking around the web for solutions; please check out these links from which the questions arise: mnemosyne, Ulf Wiger's solution to cursor problems, AMNESIA - an RDBMS implementation of mnesia.
1. Does anyone have an idea on how to handle mnesia query cursors in a different way from what is documented that is worth sharing?
2. What are the reasons why the mnesia implementers decided to force the cursors within a single transaction: even the calls for the next_answers?
3. Is there anything, from what I have presented, that I do not understand clearly (other than my bad, buggy illustration code - please ignore that)?
4. AMNESIA (in section 4.7 of the link I gave above) has a good implementation of cursors, because the subsequent calls for the next_answers do not need to be in the same transaction, nor by the same process. Would you advise anyone to switch from mnesia to amnesia due to this, and also, is this library still supported?
5. Ulf Wiger (the author of many Erlang libraries, esp. GPROC) suggests the use of mnesia:select/4. How would I use it to solve cursor problems in a web application?
NOTE: Please do not advise me to leave mnesia and use something else, because I want to use mnesia for this specific problem. I appreciate your time to read through all this question.
The motivation is that a transaction grabs (in your case) read locks.
Locks can not be kept outside of transactions.
If you want, you can run it in a dirty context, but you lose the transactional properties, i.e. the table may change between invocations:
make_cursor() ->
    QD = qlc:sort(mnesia:table(person, [{traverse, select}])),
    mnesia:activity(async_dirty, fun() -> qlc:cursor(QD) end, mnesia_frag).

get_next(Cursor) ->
    Get = fun() -> qlc:next_answers(Cursor, 5) end,
    mnesia:activity(async_dirty, Get, mnesia_frag).

del_cursor(Cursor) ->
    qlc:delete_cursor(Cursor).
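Usage might then look like this (a sketch; the cursor should be created and consumed by the same process, and the dirty-context caveat above still applies):

Cursor = make_cursor(),
Page1 = get_next(Cursor),   % first 5 answers
Page2 = get_next(Cursor),   % next 5 answers
del_cursor(Cursor).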
I think this may help you:
Use async_dirty instead of a transaction:
{Record, Cont} = mnesia:activity(async_dirty, fun mnesia:select/4, [md, [{Match_head, [Guard], [Result]}], Limit, read])
Then read the next Limit number of records:
mnesia:activity(async_dirty, fun mnesia:select/1, [Cont])
Full code:
-record(md, {id, name}).

batch_delete(Id, Limit) ->
    Match_head = #md{id = '$1', name = '$2'},
    Guard = {'<', '$1', Id},
    Result = '$_',
    {Record, Cont} = mnesia:activity(async_dirty, fun mnesia:select/4,
                                     [md, [{Match_head, [Guard], [Result]}], Limit, read]),
    delete_next({Record, Cont}).

delete_next('$end_of_table') ->
    over;
delete_next({Record, Cont}) ->
    delete(Record),
    delete_next(mnesia:activity(async_dirty, fun mnesia:select/1, [Cont])).

delete(Records) ->
    io:format("delete(~p)~n", [Records]),
    F = fun() ->
            [mnesia:delete_object(O) || O <- Records]
        end,
    mnesia:transaction(F).
Remember, you cannot use a cursor outside of one transaction.
Can I nest receive {tcp, Socket, Bin} -> calls? For example, I have a top-level loop called Loop which, upon receipt of tcp data, calls a function, parse_header, to parse header data (an integer which indicates the kind of data to follow and thus its size); after that I need to receive the entire payload before moving on. I might only receive 4 bytes when I need a full 20 bytes, and would like to call receive in a separate function called parse_payload. So the call chain would look like loop->parse_header->parse_payload, and I would like parse_payload to call receive {tcp, Socket, Bin} ->. I don't know if this is OK, or if I'm completely going to mess things up and can only do it in the Loop function. Can someone enlighten me? If I am allowed to do this, am I violating some sort of best practice?
Maybe you can check the sample code for "Erlang Programming".
The download page is Erlang Programming Source Code.
In the file socket_examples.erl, please check the "receive_data" function.
To parse messages, I think you should first determine how to separate messages from one another (fixed length or with a termination byte), then parse each message's header and payload.
receive_data(Socket, SoFar) ->
    receive
        {tcp, Socket, Bin} ->                     %% (3)
            receive_data(Socket, [Bin | SoFar]);
        {tcp_closed, Socket} ->                   %% (4)
            list_to_binary(lists:reverse(SoFar))  %% (5)
    end.
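For the fixed-length framing mentioned above, the accumulated binary can then be split with a size-prefixed pattern match; a sketch assuming each message carries a 4-byte big-endian length header:

%% Returns the complete payloads plus whatever partial data is left
%% over for the next receive.
parse_messages(<<Size:32, Payload:Size/binary, Rest/binary>>, Acc) ->
    parse_messages(Rest, [Payload | Acc]);
parse_messages(Incomplete, Acc) ->
    {lists:reverse(Acc), Incomplete}.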
You can also set a gen_tcp socket to passive mode. This way, the owning process won't receive the input as messages but has to fetch it using gen_tcp:recv(Socket, ByteCount), which returns either {ok, Input} or {error, Reason}. As this method waits infinitely for the bytes, you might want to add a timeout using gen_tcp:recv/3. (Erlang documentation of gen_tcp:recv)
While at first glance it might seem the process is now completely unable to react to messages sent to it, the following workaround improves the situation a bit:
f1(X) ->
    receive
        message1 ->
            ... do something ...,
            f1(X);
        message2 ->
            ... do something ...,
            f1(X)
    after 0 -> % timeout in ms
        {ok, Input} = gen_tcp:recv(Socket, ByteCount, Timeout),
        ... do something ..., % maybe call gen_tcp:recv a few more times
        f1(X)
    end.
If you don't add a timeout to gen_tcp:recv here, other processes could wait ages for f1 to handle their messages.
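Applied to the question's header-then-payload flow, passive mode makes the call chain direct; a sketch assuming a binary-mode socket and a 4-byte header encoding the payload size:

read_message(Socket) ->
    {ok, <<Size:32>>} = gen_tcp:recv(Socket, 4),   % exactly the header
    {ok, Payload} = gen_tcp:recv(Socket, Size),    % exactly the payload
    Payload.

For fixed-size length prefixes like this, the {packet, 1|2|4} inet option can even do the framing for you automatically.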
I have a data source that produces points at a potentially high rate, and I'd like to perform a possibly time-consuming operation on each one; but I would also like the system to degrade gracefully when it becomes overloaded, by dropping excess data points.
As far as I can tell, using a gen_event will never skip events. Conceptually, what I would like the gen_event to do is to drop all but the latest pending events before running the handlers again.
Is there a way to do this with standard OTP? Or is there a good reason why I should not handle things that way?
So far the best I have is using a gen_server and relying on the timeout to trigger the expensive events:
-behaviour(gen_server).

init([]) ->
    {ok, Pid} = gen_event:start_link(),
    {ok, {Pid, none}}.

handle_call({add, H, A}, _From, {Pid, Data}) ->
    {reply, gen_event:add_handler(Pid, H, A), {Pid, Data}}.

handle_cast(Data, {Pid, _OldData}) ->
    {noreply, {Pid, Data}, 0}. % timeout 0: fires as soon as the mailbox is empty

handle_info(timeout, {Pid, Data}) ->
    gen_event:sync_notify(Pid, Data),
    {noreply, {Pid, Data}}.
Is this approach correct? (esp. with respect to supervision?)
I can't comment on supervision, but I would implement this as a queue with expiring items.
I've implemented something you can use below.
I made it a gen_server; when you create it, you give it a maximum age for old items.
Its interface is that you can send it items to be processed and you can request items that have not been dequeued. It records the time at which it receives every item, and every time it receives an item to be processed, it checks all the items in the queue, dequeueing and discarding those that are older than the maximum age. (If you want the maximum age to be always respected, you can filter the queue before you return queued items.)
Your data source will cast data ({process_this, Anything}) to the work queue, and your (potentially slow) consumer process will call (gimme) to get data.
-module(work_queue).
-behavior(gen_server).

-export([init/1, handle_cast/2, handle_call/3]).

init(DiscardAfter) ->
    {ok, {DiscardAfter, queue:new()}}.

handle_cast({process_this, Data}, {DiscardAfter, Queue0}) ->
    Instant = now(), % deprecated in modern OTP; erlang:timestamp/0 returns the same shape
    Queue1 = queue:filter(fun({Stamp, _}) -> not too_old(Stamp, Instant, DiscardAfter) end, Queue0),
    Queue2 = queue:in({Instant, Data}, Queue1),
    {noreply, {DiscardAfter, Queue2}}.

handle_call(gimme, _From, State = {DiscardAfter, Queue0}) ->
    case queue:is_empty(Queue0) of
        true ->
            {reply, no_data, State};
        false ->
            {{value, {_Stamp, Data}}, Queue1} = queue:out(Queue0),
            {reply, {data, Data}, {DiscardAfter, Queue1}}
    end.

%% difference between two now()-style timestamps, in microseconds
delta({Mega1, Unit1, Micro1}, {Mega2, Unit2, Micro2}) ->
    ((Mega2 - Mega1) * 1000000 + Unit2 - Unit1) * 1000000 + Micro2 - Micro1.

too_old(Stamp, Instant, DiscardAfter) ->
    delta(Stamp, Instant) > DiscardAfter.
Little demo at the REPL:
c(work_queue).
{ok, PidSrv} = gen_server:start(work_queue, 10 * 1000000, []).
gen_server:cast(PidSrv, {process_this, <<"going_to_go_stale">>}),
timer:sleep(11 * 1000),
gen_server:cast(PidSrv, {process_this, <<"going to push out previous">>}),
{gen_server:call(PidSrv, gimme), gen_server:call(PidSrv, gimme)}.
Is there a way to do this with standard OTP?
No.
is there a good reason why I should not handle things that way?
No, timing out early can increase the performance of the entire system. Read about how here.
Is this approach correct? (esp. with respect to supervision?)
No idea, you haven't provided the supervision code.
As a bit of extra information to your first question:
If you can use 3rd party libraries outside of OTP, there are a few out there that can add preemptive timeouts, which is what you are describing.
There are two that I am familiar with: the first is dispcount, and the second is chick (I'm the author of chick; I'll try not to advertise the project here).
Dispcount works really well for single resources that only have a limited number of jobs that can be run at the same time, and it does no queuing. You can read about it here (warning: lots of really interesting information!).
Dispcount didn't work for me because I would have had to spawn 4000+ pools of processes to handle the number of different queues inside my app. I wrote chick because I needed a way to dynamically increase and decrease my queue length, as well as to be able to queue up requests and deny others, without having to spawn 4000+ pools of processes.
If I were you, I would try out dispcount first (as most solutions do not need chick), and then, if you need something a bit more dynamic, try out chick.