I have a basic RabbitMQ install with the default guest/guest user.
Given the following simple Erlang test code for RabbitMQ (Erlang client), I am getting the error below. The queue TEST_DIRECT_QUEUE exists and has 7 messages in it, and the RabbitMQ server is up and running.
If I try to create a new queue with a queue.declare command, I also get a similar error.
Overall, the error appears on any amqp_channel:call command.
Any thoughts? Thanks.
=ERROR REPORT==== 16-Feb-2013::10:39:42 ===
Connection (<0.38.0>) closing: internal error in channel (<0.50.0>): shutdown
** exception exit: {shutdown,{gen_server,call,
[<0.50.0>,
{call,{'queue.declare',0,"TEST_DIRECT_QUEUE",false,false,
false,false,false,[]},
none,<0.31.0>},
infinity]}}
in function gen_server:call/3 (gen_server.erl, line 188)
in call from test:test_message/0 (test.erl, line 12)
==============================================
-module(test).
-export([test_message/0]).
-include_lib("amqp_client/include/amqp_client.hrl").
-record(state, {channel}).
test_message() ->
    {ok, Connection} = amqp_connection:start(#amqp_params_network{}),
    {ok, Channel} = amqp_connection:open_channel(Connection),
    Get = #'basic.get'{queue = "TEST_DIRECT_QUEUE"},
    {#'basic.get_ok'{}, Content} = amqp_channel:call(Channel, Get), % <=== error here
    #'basic.get_empty'{} = amqp_channel:call(Channel, Get),
    amqp_channel:call(Channel, #'channel.close'{}).
I have identified the issue myself after some frustrating hours. Overall, let me confess that I am upset with the vague tutorials and documentation about RabbitMQ... Anyway, here is what the problem was:
1) Queue names are supposed to be binaries, i.e. wrapped in << and >>. For example: <<"my queue name">> (quotes included as well).
2) In a different scenario where I was trying to create the queue with queue.declare, the fact that the queue already existed was not a problem; the problem was that the existing queue was durable and my queue.declare did not specify that property, which made the call fail and interrupt execution. This is unfortunate behavior: developers would normally expect the queue to be matched simply by name and then proceed. So to fix it I had to specify the durable value so the declaration matches the existing queue's properties.
Here is a simple working version:
-module(test).
-export([test/0]).
-include_lib("amqp_client/include/amqp_client.hrl").

test() ->
    {ok, Connection} = amqp_connection:start(#amqp_params_network{}),
    {ok, Channel} = amqp_connection:open_channel(Connection),
    %% The declaration must match the existing queue's properties (durable = true),
    %% and the queue name must be a binary.
    Declare = #'queue.declare'{queue = <<"TEST_DIRECT_QUEUE">>, durable = true},
    #'queue.declare_ok'{} = amqp_channel:call(Channel, Declare),
    Get = #'basic.get'{queue = <<"TEST_DIRECT_QUEUE">>, no_ack = true},
    {#'basic.get_ok'{}, Content} = amqp_channel:call(Channel, Get),
    #amqp_msg{payload = Payload} = Content,
    Payload.
When I read the book Erlang and OTP in Action, I found this reminder on page 117:
With your RPC server, you can try calling any function exported from any module available on the server side, except one: your own tr_server:get_count/0. In general, a server can’t call its own API functions. Suppose you make a synchronous call to the same server from within one of the callback functions: for example, if handle_info/2 tries to use the get_count/0 API function. It will then perform a gen_server:call(...) to itself. But that request will be queued up until after the current call to handle_info/2 has finished, resulting in a circular wait—the server is deadlocked.
But then I looked at the tr_server sample code:
get_count() ->
    gen_server:call(?SERVER, get_count).

stop() ->
    gen_server:cast(?SERVER, stop).

handle_info({tcp, Socket, RawData}, State) ->
    do_rpc(Socket, RawData),
    RequestCount = State#state.request_count,
    {noreply, State#state{request_count = RequestCount + 1}};
......

do_rpc(Socket, RawData) ->
    try
        {M, F, A} = split_out_mfa(RawData),
        Result = apply(M, F, A), % tr_server process -> handle_info -> do_rpc -> call & cast
        gen_tcp:send(Socket, io_lib:fwrite("~p~n", [Result]))
    catch
        _Class:Err ->
            gen_tcp:send(Socket, io_lib:fwrite("~p~n", [Err]))
    end.
The book's caution and its example code seem inconsistent to me: through do_rpc/2 and apply/3, the tr_server process can end up doing a gen_server:call and gen_server:cast on itself.
Am I misinterpreting this?
Calling gen_server:cast from within the server process is fine, because it is asynchronous: it adds the message to the mailbox of the process and then continues, returning ok. Only gen_server:call has this problem, because it makes the process wait for an answer from itself.
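To see the difference concretely, here is a small self-contained sketch (module, function and message names are mine, not from the book):

-module(self_call_demo).
-behaviour(gen_server).

-export([start_link/0, ping/1, poke/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

start_link() -> gen_server:start_link(?MODULE, [], []).

ping(Pid) -> gen_server:call(Pid, ping).  %% synchronous API function
poke(Pid) -> gen_server:cast(Pid, poke).  %% asynchronous API function

init([]) -> {ok, undefined}.

handle_call(ping, _From, State) -> {reply, pong, State}.

handle_cast(poke, State) -> {noreply, State}.

handle_info(try_cast, State) ->
    ok = poke(self()),   %% safe: cast only enqueues the message and returns ok
    {noreply, State};
handle_info(try_call, State) ->
    %% deadlock: the reply can only be produced after this clause returns,
    %% so this call never gets a normal reply
    pong = ping(self()),
    {noreply, State}.

Sending Pid ! try_cast works fine, while Pid ! try_call never gets an answer and ends up crashing the server (with a call timeout on older OTP releases).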
Since ejabberd no longer supports the iqdisc option, I am trying to create my own custom queue to handle all the IQ packets with a custom namespace and return the result back to the client.
I am able to spawn my own process and run the functions and obtain the result. But the process fails in the end with the following error:
09:07:42.850 [error] Failed to route packet:
#iq{id = <<"5b660017-c52c-4fe8-8a56-213c1a86bc12-3">>,type = get,
lang = <<"en">>,
from =
#jid{
user = <<"check">>,server = <<"localhost">>,
resource = <<"64195176449188261912514">>,luser = <<"check">>,
lserver = <<"localhost">>,
lresource = <<"64195176449188261912514">>},
to =
#jid{
user = <<>>,server = <<"localhost">>,resource = <<>>,luser = <<>>,
lserver = <<"localhost">>,lresource = <<>>},
sub_els =
[#xmlel{
name = <<"....
......
.......
exception error: no try clause matching ok
in function gen_iq_handler:process_iq/4 (src/gen_iq_handler.erl, line 110)
in call from ejabberd_router:do_route/1 (src/ejabberd_router.erl, line 399)
in call from ejabberd_router:route/1 (src/ejabberd_router.erl, line 92)
in call from ejabberd_c2s:check_privacy_then_route/2 (src/ejabberd_c2s.erl, line 842)
in call from xmpp_stream_in:process_authenticated_packet/2 (src/xmpp_stream_in.erl, line 697)
in call from xmpp_stream_in:handle_info/2 (src/xmpp_stream_in.erl, line 392)
in call from p1_server:handle_msg/8 (src/p1_server.erl, line 696)
in call from proc_lib:init_p_do_apply/3 (proc_lib.erl, line 249)
The packet shown above is going from the client to the server, and it looks fine to me.
I am using the latest ejabberd version: 19.09.1
I guess I am having trouble routing the IQ response back to the client. Am I missing something else here? What is the ideal way to dedicate a queue to processing all IQ packets for a custom namespace and then return the responses to the client, without blocking the main queue, so that ejabberd can still handle other incoming packets in the meantime?
Any ideas/pointers would be really appreciated. Thanks!
I figured this out.
The spawned process was not returning the desired result and hence the request failed.
I am still trying to figure out how to process multiple packets (targeting different namespaces) concurrently in ejabberd, so that all packets for my custom namespace run on a single queue.
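For anyone hitting the same "no try clause matching ok" error: the handler has to return a result #iq{} (or the atom ignore) to gen_iq_handler, never ok. A rough sketch of the shape that works, against the ejabberd 19.x xmpp API (the do_work/1 helper is a placeholder, not real code):

process_iq(#iq{type = get} = IQ) ->
    %% do_work/1 stands in for the real work; it must return an XML element
    Payload = do_work(IQ),
    xmpp:make_iq_result(IQ, Payload);
process_iq(#iq{} = IQ) ->
    xmpp:make_error(IQ, xmpp:err_feature_not_implemented()).

If the work is handed off to a spawned process instead, the handler can return ignore, and the worker can later route the reply itself with ejabberd_router:route(xmpp:make_iq_result(IQ, Payload)).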
I am implementing the Gossip Algorithm, in which multiple actors spread a gossip at the same time in parallel. The system stops when each actor has heard the gossip 10 times.
Now, I have a scenario in which I check the listen count of the recipient actor before sending the gossip to it. If the listen count is already 10, the gossip is not sent to that actor. I do this with a synchronous call to get the listen count.
def get_message(server, msg) do
  GenServer.call(server, {:get_message, msg})
end

def handle_call({:get_message, msg}, _from, state) do
  listen_count = hd(state)
  {:reply, listen_count, state}
end
The program runs well at the start, but after some time GenServer.call fails with a timeout error like the one below. After some debugging, I realized that GenServer.call becomes dormant and the corresponding handle_call is never invoked. Is this behavior expected when using synchronous calls? Since all actors are independent, shouldn't the GenServer.call invocations run independently, without waiting for each other's responses?
02:28:05.634 [error] GenServer #PID<0.81.0> terminating
** (stop) exited in: GenServer.call(#PID<0.79.0>, {:get_message, []}, 5000)
** (EXIT) time out
(elixir) lib/gen_server.ex:774: GenServer.call/3
Edit: The following code can reproduce the error when running in iex shell.
defmodule RumourActor do
  use GenServer

  def start_link(opts) do
    {:ok, pid} = GenServer.start_link(__MODULE__, opts)
    {pid}
  end

  def set_message(server, msg, recipient) do
    GenServer.cast(server, {:set_message, msg, server, recipient})
  end

  def get_message(server, msg) do
    GenServer.call(server, :get_message)
  end

  def init(opts) do
    state = opts
    {:ok, state}
  end

  def handle_cast({:set_message, msg, server, recipient}, state) do
    :timer.sleep(5000)
    c = RumourActor.get_message(recipient, [])
    IO.inspect c
    {:noreply, state}
  end

  def handle_call(:get_message, _from, state) do
    count = tl(state)
    {:reply, count, state}
  end
end
Open an iex shell and load the above module. Start two processes using:
a = RumourActor.start_link(["", 3])
b = RumourActor.start_link(["", 5])
Produce the error by creating the deadlock condition mentioned by Dogbert in the comments. Run the following two calls with little delay between them:
cb = RumourActor.set_message(elem(a,0), [], elem(b,0))
ca = RumourActor.set_message(elem(b,0), [], elem(a,0))
Wait for 5 seconds. Error will appear.
A gossip protocol is a way of dealing with asynchronous, unknown, unconfigured (random) networks that may be suffering intermittent outages and partitions and where no leader or default structure is present. (Note that this situation is somewhat unusual in the real world and out-of-band control is always imposed on systems in some way.)
With that in mind, let's change this to be an asynchronous system (using cast) so that we are following the spirit of the concept of chatty gossip style communication.
We need a digest that counts how many times a given message has been received, a set of message IDs that are already over the magic number (so we don't re-send one that shows up way late), and a list of processes enrolled in our system so we know to whom we are broadcasting:
(The following example is in Erlang because I just trip over Elixir syntax ever since I stopped using it...)
-module(rumor).

-record(s,
        {peers  = [] :: [pid()],
         digest = #{} :: #{message_id() => non_neg_integer()},
         dead   = sets:new() :: sets:set(message_id())}).

-type message_id() :: zuuid:uuid().
Here I am using a UUID, but it could be whatever. An Erlang reference would be fine for a test case, but since gossip isn't useful within an Erlang cluster, and references are unsafe outside the originating system, I'm just jumping to the assumption that this is for a networked system.
We will need an interface function that allows us to tell a process to inject a new message into the system. We will also need an interface function that sends a message between two processes once it is already in the system. Then we will need an inner function that broadcasts messages to all the known (subscribed) peers. Ah, that means we need a greeting interface so that peer processes can notify each other of their presence.
We will also want a way to have a process tell itself to keep broadcasting over time. How long to set the interval on retransmission is not actually a simple decision -- it has everything to do with network topology, latency, variability, etc (you would actually probably occasionally ping peers and develop some heuristic based on the latency, drop peers that seem unresponsive, and so on -- but we're not going to get into that madness here). Here I'm just going to set it for 1 second because that is an easy to interpret interval for humans observing the system.
Note that everything below is asynchronous.
Interfaces...
insert(Pid, Message) ->
    gen_server:cast(Pid, {insert, Message}).

relay(Pid, ID, Message) ->
    gen_server:cast(Pid, {relay, ID, Message}).

greet(Pid) ->
    gen_server:cast(Pid, {greet, self()}).

make_introduction(Pid, PeerPid) ->
    gen_server:cast(Pid, {make_introduction, PeerPid}).
That last function is going to be our way as testers of the system to cause one of the processes to call greet/1 on some target Pid so they start to build a peer network. In the real world something slightly different usually goes on.
Inside our gen_server callback for receiving a cast we will get:
handle_cast({insert, Message}, State) ->
    NewState = do_insert(Message, State),
    {noreply, NewState};
handle_cast({relay, ID, Message}, State) ->
    NewState = do_relay(ID, Message, State),
    {noreply, NewState};
handle_cast({greet, Peer}, State) ->
    NewState = do_greet(Peer, State),
    {noreply, NewState};
handle_cast({make_introduction, Peer}, State) ->
    NewState = do_make_introduction(Peer, State),
    {noreply, NewState}.
Pretty simple stuff.
Above I mentioned that we would need a way for this thing to tell itself to resend after a delay. To do that we are going to send ourselves a naked message to "redo_relay" after a delay using erlang:send_after/3 so we are going to need a handle_info/2 to deal with it:
handle_info({redo_relay, ID, Message}, State) ->
    NewState = do_relay(ID, Message, State),
    {noreply, NewState}.
Implementation of the message bits is the fun part, but none of this is terribly tricky. Forgive the do_relay/3 below -- it could be more concise, but I'm writing this in a browser off the top of my head, so...
do_insert(Message, State = #s{peers = Peers, digest = Digest}) ->
    MessageID = zuuid:v1(),
    NewDigest = maps:put(MessageID, 1, Digest),
    ok = broadcast(MessageID, Message, Peers),
    ok = schedule_resend(MessageID, Message),
    State#s{digest = NewDigest}.
do_relay(ID,
         Message,
         State = #s{peers = Peers, digest = Digest, dead = Dead}) ->
    case maps:find(ID, Digest) of
        {ok, Count} when Count >= 10 ->
            NewDigest = maps:remove(ID, Digest),
            NewDead = sets:add_element(ID, Dead),
            ok = broadcast(ID, Message, Peers),
            State#s{digest = NewDigest, dead = NewDead};
        {ok, Count} ->
            NewDigest = maps:put(ID, Count + 1, Digest),
            ok = broadcast(ID, Message, Peers),
            ok = schedule_resend(ID, Message),
            State#s{digest = NewDigest};
        error ->
            case sets:is_element(ID, Dead) of
                true ->
                    State;
                false ->
                    NewDigest = maps:put(ID, 1, Digest),
                    ok = broadcast(ID, Message, Peers),
                    ok = schedule_resend(ID, Message),
                    State#s{digest = NewDigest}
            end
    end.
broadcast(ID, Message, Peers) ->
    Forward = fun(P) -> relay(P, ID, Message) end,
    lists:foreach(Forward, Peers).

schedule_resend(ID, Message) ->
    _ = erlang:send_after(1000, self(), {redo_relay, ID, Message}),
    ok.
And now we need the social bits...
do_greet(Peer, State = #s{peers = Peers}) ->
    case lists:member(Peer, Peers) of
        false -> State#s{peers = [Peer | Peers]};
        true  -> State
    end.

do_make_introduction(Peer, State) ->
    ok = greet(Peer),
    do_greet(Peer, State).
So what did all of the horribly untypespecced stuff up there do?
It avoided any possibility of a deadlock. The reason deadlocks are so, well, deadly in peer systems is that anytime you have two identical processes (or actors, or whatever) communicating synchronously, you have created a textbook case of a potential deadlock.
Any time A has a synchronous message headed toward B and B has a synchronous message headed toward A at the same time, you have a deadlock. There is no way to create two identical processes that call each other synchronously without creating a potential deadlock. In massively concurrent systems, anything that might happen almost certainly will eventually, so you're going to run into this sooner or later.
Gossip is intended to be asynchronous for a reason: it is a sloppy, unreliable, inefficient way to deal with a sloppy, unreliable, inefficient network topology. Trying to make calls instead of casts not only defeats the purpose of gossip-style message relay, it also pushes you into impossible deadlock territory incident to changing the nature of the protocol from async to sync.
GenServer.call has a default timeout of 5000 milliseconds. So what is probably happening is that the actor's message queue is filled with millions of messages, and by the time it gets to the call, the calling actor has already timed out.
You can handle timeout using a try...catch:
try do
  c = RumourActor.get_message(recipient, [])
  c
catch
  :exit, _reason ->
    # handle the timeout here, e.g. fall back to a default
    nil
end
Now the called actor will eventually get to the call message and respond, and that reply will arrive as an unexpected message in the first process, which you will need to handle in handle_info. So one way is to ignore the error in the catch block and send the rumour from handle_info.
Also, this will significantly degrade performance if there are many processes waiting to time out for 5 seconds before moving ahead. You could deliberately reduce the timeout and handle the reply in handle_info, which effectively reduces to using cast and handling the reply from the other process.
Your blocking call needs to be broken into two non-blocking calls. So if A is making a blocking call to B, instead of waiting for the reply, A can ask B to send its state to a given address (A's address) and move on.
Then A will handle that message separately and reply if necessary.
A.fun1():
    body of A before blocking call
    result = blockingcall()
    do things based on result
needs to be divided into:
A.send():
    body of A before blocking call
    nonblockingcall(A.receive)  # A.receive is where B should send results
    do other things

A.receive(result):
    do things based on result
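In Erlang terms (following the other answer here, which is also in Erlang; the names below are only illustrative), the split looks roughly like this:

%% asking side: non-blocking, just tell Peer where to send its count
ask_count(Peer) ->
    gen_server:cast(Peer, {get_count, self()}).

%% replying side: send the count back, also non-blocking
handle_cast({get_count, ReplyTo}, State) ->
    gen_server:cast(ReplyTo, {count, hd(State)}),   %% hd(State) as in the question's code
    {noreply, State};
%% this clause is the "A.receive(result)" half: act on the result when it arrives
handle_cast({count, Count}, State) ->
    NewState = maybe_send_rumour(Count, State),     %% placeholder for the real logic
    {noreply, NewState}.

Neither side ever blocks, so the mutual-call deadlock cannot occur.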
It's my first attempt to write anything in Erlang, so maybe the question is silly.
I'm writing a quite simple HTTP server using cowboy
db_name() -> "DB_test".

timestamp() ->
    calendar:datetime_to_gregorian_seconds(calendar:universal_time()).

sha(Str) ->
    <<X:256/big-unsigned-integer>> = crypto:hash(sha256, Str),
    lists:flatten(io_lib:format("~64.16.0b", [X])).

handle_post(Req0, State) ->
    Link = binary_to_list(cowboy_req:header(<<"link">>, Req0)),
    dets:open_file(db_name(), []),
    dets:insert(db_name(), {hashed_url(Link), Link, timestamp()}),
    Req = cowboy_req:reply(200,
                           #{<<"content-type">> => <<"text/plain">>},
                           sha(Link),
                           Req0),
    {ok, Req, State}.
The idea is that a POST HTTP request contains a 'link' header with some link. After receiving such a request, my server should store the link's hash in a dets table along with the link and a timestamp. The problem is that the "DB_test" file isn't created. Why?
Based on your example code, it's hard to say exactly why, since you're ignoring the return values from both dets:open_file/2 and dets:insert/2.
Both of them return different values for the success and failure cases, but neither throws an exception.
See the official documentation for more details: http://erlang.org/doc/man/dets.html
The simplest solution to this is to crash the cowboy connection handling process in non-success cases. You can do that by doing something like the following:
{ok, Ref} = dets:open_file(db_name(), []),
ok = dets:insert(Ref, {hashed_url(Link), Link, timestamp()}),
This will crash with a badmatch exception in the failure cases, when the value returned cannot be pattern matched to the left-hand side of the statement, subsequently causing cowboy to return HTTP 500 to the client.
You'll then see details of what the actual error was in the logged stack trace.
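Putting that together, the whole handler could look roughly like this (a sketch only, reusing db_name/0, sha/1 and timestamp/0 from the question, and using sha/1 where the original had hashed_url/1; opening the table once at application startup instead of on every request would be better still):

handle_post(Req0, State) ->
    Link = binary_to_list(cowboy_req:header(<<"link">>, Req0)),
    {ok, Ref} = dets:open_file(db_name(), []),              %% crash on failure
    ok = dets:insert(Ref, {sha(Link), Link, timestamp()}),  %% crash on failure
    ok = dets:close(Ref),                                   %% flush the table to disk
    Req = cowboy_req:reply(200,
                           #{<<"content-type">> => <<"text/plain">>},
                           sha(Link),
                           Req0),
    {ok, Req, State}.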
A second solution would be to explicitly handle the failure cases, you can use the 'case' keyword for that.
An example would be something like:
case dets:open_file(db_name(), []) of
    {ok, Ref} ->
        do_success_things();
    {error, _Reason} = E ->
        io:format("Unable to open database file: ~p~n", [E]),
        do_failure_things()
end
For further reading, I'd highly recommend the Syntax in functions and Errors and exceptions chapters of Learn you some Erlang for great good: http://learnyousomeerlang.com/
I am trying to update my process's state on a 10 second timer.
-define(INTERVAL, 3000).

start_link() ->
    gen_server:start_link(?MODULE, [], []).

action(Pid, Action) ->
    gen_server:call(Pid, Action).

init([]) ->
    erlang:send_after(?INTERVAL, self(), trigger),
    {ok, temple:new()}.
What I want to do is call this:
handle_call({fight}, _From, Temple) ->
    NewTemple = temple:fight(Temple),
    {reply, NewTemple, NewTemple};
So I try
handle_info(trigger, _State) ->
    land:action(self(), {fight}),
    erlang:send_after(?INTERVAL, self(), trigger);
but I get
=ERROR REPORT==== 4-Dec-2016::19:00:35 ===
** Generic server <0.400.0> terminating
** Last message in was trigger
** When Server state == {{dict,0,16,16,8,80,48,
{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],
[]},
{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],
[]}}},
[]}
** Reason for termination ==
** {function_clause,[{land,terminate,
[{timeout,{gen_server,call,[<0.400.0>,{fight}]}},
{{dict,0,16,16,8,80,48,
{[],[],[],[],[],[],[],[],[],[],[],[],[],[],
[],[]},
{{[],[],[],[],[],[],[],[],[],[],[],[],[],
[],[],[]}}},
[]}],
[{file,"src/land.erl"},{line,47}]}
It appears that with land:action(self(), {fight}), you're attempting to make a call to the same gen_server in which you're currently handling the trigger message. Two important facts will explain why this can't work:
A call always waits for the result to be returned.
A gen_server is a process, and a process can handle only one message at a time.
In handling the trigger message, you're saying to call back to yourself and wait for yourself to process the {fight} message. Since you're in the middle of handling the trigger message, though, you'll never get to the {fight} message. You're effectively in a deadlock with yourself. That's why you're getting a timeout.
P.S. Posting an SSCCE is far more likely to get you good answers.
The error message means that there is no matching terminate clause in the land server module; you should have had a warning when you compiled it.
The terminate clause is called because a timeout occurs during the gen_server call with the parameters (Pid = <0.400.0>, Message = {fight}), which was made by the line land:action(self(), {fight}). A gen_server call must complete within a maximum time, 5000 ms by default, so you have to reduce the time spent in the fight action.
Note that it is not a good idea to increase the server timeout, since a gen_server call is blocking: while a gen_server call is being executed no new message can be handled, and in your example it is also blocking the execution of the handle_info(trigger, _State) code.
Last point: the handle_info(trigger, _State) clause should return a tuple of the form {noreply, NewState}, but its last line, erlang:send_after(?INTERVAL, self(), trigger);, returns a timer reference, so you have to modify this line.
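For example, one way to rewrite that clause (a sketch, assuming the fight logic lives in temple:fight/1 as in the question) is to do the work directly in handle_info instead of calling back into the same server:

handle_info(trigger, Temple) ->
    NewTemple = temple:fight(Temple),                %% no self-call, just do the work
    erlang:send_after(?INTERVAL, self(), trigger),   %% re-arm the timer
    {noreply, NewTemple}.                            %% proper handle_info return value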