I have implemented a simple appmod that handles WebSockets and echoes back messages. But how do I handle a ws.close(); from the JavaScript client? I have tried the code below, but handle_message({close, Reason}) is never called and ws.onclose = function(evt) {} is never executed on the JavaScript client.
When I use the same JavaScript client code against a Node.js WebSocket server, the client receives an onclose event immediately after ws.close();.
Here is the code for my simple appmod:
-module(mywebsocket).
-export([handle_message/1]).

handle_message({text, Message}) ->
    {reply, {text, <<Message/binary>>}};
handle_message({close, Reason}) ->
    io:format("User closed websocket.~n", []),
    {close, normal}.
Updated answer:
As of GitHub commit 16834c, which will eventually be part of Yaws 1.93, Yaws passes a new message to your WebSocket callback module when the client sends a close frame. The message is:
{close, Status, Reason}
where Status is either the close status sent by the client, or the numerical value 1000 (specified by RFC 6455 for a normal close) if the client didn't include a status value. Reason is a binary holding any optional reason string passed from the client; it will be an empty binary if the client sent no reason.
Your callback handler for a close message MUST return {close, CloseReason} where CloseReason is either the atom normal for a normal close (which results in the status code 1000 being returned to the client) or another legal numerical status code allowed by RFC 6455. Note that CloseReason is unrelated to any Reason value passed by the client. Technically CloseReason can also be any other Erlang term, in which case Yaws returns status 1000 and passes the term to erlang:exit/1 to exit the Erlang process handling the web socket, but based on RFC 6455 we suggest simply returning the atom normal for CloseReason in all cases.
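For example, a handler clause for this new callback might look like the following minimal sketch, based on the description above (the io:format line is just illustrative):

handle_message({close, Status, Reason}) ->
    %% Status is the client's close status (or 1000), Reason is a binary.
    io:format("Client closed websocket: status ~p, reason ~p~n", [Status, Reason]),
    {close, normal}.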
Original answer, obsoleted by Yaws GitHub commit 16834c:
Yaws never passes a {close, Reason} message to your callback module. Rather, {close, Reason} is a valid return value from handle_message/1 should your callback module decide it wants to close the ws socket.
I modified the websockets_example.yaws file shipped with Yaws (version 1.92) to call this._ws.close() in the client if the user enters the "bye" message on the web page, and added an alert to the _onclose function to show that the onclose event is triggered. In this case the alert occurred, I believe because the "bye" message causes the server to close the ws socket explicitly. But I then modified the example to call this._ws.close() in the client no matter what message the user enters, and in that case no alert for onclose occurred. In this case, a check with lsof showed the ws connection from the browser to Yaws was still present.
So, for now I believe you've hit a bug where the Yaws websockets support isn't detecting the client close and closing its end. I'll see if I can fix it.
ejabberd seems to exhibit strange behaviour when a PUT request has a body larger than a certain length (in my case 902 bytes): it trims the body (in my case it then receives malformed JSON).
GitHub reference: https://github.com/processone/ejabberd/blob/master/src/ejabberd_http.erl#L403
If I change the case statement to:
case Method of
    _AnyMethod ->
        case recv_data(State) of
            {ok, Data} ->
                LQuery = case catch parse_urlencoded(Data) of
                             {'EXIT', _Reason} -> [];
                             LQ -> LQ
                         end,
                {State, {LPath, LQuery, Data}};
            error ->
                {State, false}
        end
end
then the body is parsed correctly.
Is this a configuration issue? How can I force Ejabberd to correctly parse the JSON body?
Looks like you've found a bug.
As you've noticed, for POST requests the function recv_data is called, which checks the Content-Length header and reads that many bytes from the socket. For PUT requests, however, it only uses Trail, which is the data that has already been received while reading the HTTP request headers. (This happens in the receive_headers function, which passes a length of 0 to the recv function, meaning it won't wait for any specific amount of data.)
How much of the request body is received is going to depend on the size of the headers, as well as the way the client sends the request. If for example the client first sends the headers in one network packet, and then the request body in the next network packet, ejabberd wouldn't pick up the request body at all.
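To make the mechanism concrete, here is a conceptual sketch only (not the actual ejabberd code; read_body/2 is a made-up name): getting the whole body reliably means reading Content-Length bytes from the socket, rather than using only whatever bytes happened to arrive together with the headers.

read_body(Socket, ContentLength) when ContentLength > 0 ->
    %% In passive mode this blocks until exactly ContentLength bytes arrive.
    gen_tcp:recv(Socket, ContentLength);
read_body(_Socket, 0) ->
    {ok, <<>>}.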
Now that I'm using Cowboy as a WebSocket server, I understand that the handler implements a behaviour called cowboy_websocket_handler, that the websocket_handle/3 function is called every time a frame is received, and that we reply to it by returning {reply, X, _}. However, since WebSocket is a bi-directional protocol and the server can reach the client without a request, how do I send data to the client outside of websocket_handle?
I am expecting something in the handler along the lines of send(Client, Data). Am I thinking in the right direction? If yes, does Cowboy provide an API to do so? Thanks!
To quote the docs:
Cowboy will call websocket_info/2 whenever an Erlang message arrives. The handler can handle or ignore the messages. It can also send frames to the client or stop the connection.
The following snippet forwards log messages to the client and ignores all others:
websocket_info({log, Text}, State) ->
    {reply, {text, Text}, State};
websocket_info(_Info, State) ->
    {ok, State}.
So all you have to do is send a message to your handler from another process (or from itself if you wish), and implement websocket_info as above to send a frame to the client.
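For example, to push data to the client from elsewhere in your application, send a plain Erlang message to the handler's process. A minimal sketch (notify_client/2 is a hypothetical helper; how you obtain the handler pid, e.g. by capturing self() in the handler and storing it somewhere reachable, is up to you; the {log, Text} tuple matches the websocket_info clause quoted above):

notify_client(HandlerPid, Text) ->
    %% HandlerPid is the pid of the websocket handler process.
    HandlerPid ! {log, Text}.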
I am using the TIdHTTP component and its GET function.
The GET function sends a complete request, which is fine.
However, I would like to save some traffic on the GET response and only receive the response code, which is in the first line of an HTTP response.
Is there a way to disconnect so as to avoid receiving any further content?
As mentioned, I only need the response code from a website.
Alternatively, I thought about using Indy's TCP component (with an SSL IOHandler), crafting my own HTTP request, then reading the response code and disconnecting on success - but I don't know how to do that.
TIdHTTP has an OnHeadersAvailable event that is intended for this very task. It is triggered after the response headers have been read and before the body content is read, if any. It has a VContinue output parameter that you can set to False to cancel any further reading.
Update: Something I just discovered: when setting VContinue=False in the OnHeadersAvailable event, TIdHTTP will set Response.KeepAlive=False and skip reading the response body (OK so far). But after the response is done being processed, TIdHTTP checks the KeepAlive value, and the property getter returns True if the socket hasn't been closed on the server's end (HTTP 1.1 uses keep-alives by default). This causes TIdHTTP not to close its end of the socket, leaving any response body unread. If you then re-use the same TIdHTTP object for a new HTTP request, it will end up processing the unread body data from the previous response before it sees the response headers of the new request.
You can work around this issue by setting the Request.Connection property to 'close' before calling TIdHTTP.Get(). That tells the server to close its end of the socket connection after sending the response (although I just found that when requesting an HTTPS URL, especially after an HTTP request redirects to HTTPS, TIdHTTP clears the Request.Connection value!). Or, simply call TIdHTTP.Disconnect() after TIdHTTP.Get() exits.
I have now updated TIdHTTP to:
no longer clear the Request.Connection when preparing an HTTPS request.
close its end of the socket connection if either:
OnHeadersAvailable returns VContinue=False
the Request.Connection property (or, if connected to a proxy, the Request.ProxyConnection property) has been set to 'close', regardless of the server's response.
Usually you would use TIdHttp.Head, because HEAD requests are intended for doing just that.
If the server does not accept HEAD requests, as in the OP's case, you can assign the OnWorkBegin event of your TIdHttp instance and call TIdHttp(Sender).Disconnect; there. This immediately closes the connection and the download does not continue, but you still have the metadata like the response code, content length, etc.
I have a Cowboy WebSocket server. Many clients send messages over the WebSocket, and I need to do some processing on each message. I could do that in websocket_handle, but since this is real-time I would like to avoid that; instead I want to send the message to a global process where all the processing can be done.
Since each Cowboy connection has its own process, how do I run a process to which every user can send messages, so that the processing can be done in that process?
Just to clarify, each websocket connection will have its own Erlang process in Cowboy, so messages from different websocket clients will be processed in different processes.
If you need to move the processing from the websocket you can simply start a new handler/server process when your app starts (e.g. when you start Cowboy) that listens for process commands and data. Sample processing code:
-module(my_processor).
-export([start/0]).

start() ->
    spawn(fun process_loop/0).

process_loop() ->
    receive
        {process_cmd, Data} ->
            %% process/1 is where your application-specific work goes.
            process(Data)
    end,
    process_loop().
When you start it, also register the process under a name, so we can reference it from the websocket handlers later.
Pid = my_processor:start().
register(processor, Pid).
Now you can send the data from Cowboy's websocket_handle/3 function to the handling process:
websocket_handle(Data, Req, State) ->
    ...,
    processor ! {process_cmd, Data},
    ...,
    {ok, Req, State}.
Note that the my_processor process will handle the processing requests from all connections. If you want a separate process for each websocket connection, you could start my_processor in Cowboy's websocket_init/3 function, store the pid of the my_processor process in the State parameter returned from websocket_init, and use that pid instead of the processor registered name.
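A rough sketch of that per-connection variant, assuming the cowboy_websocket_handler API with websocket_init/3 mentioned in the question (the map-shaped State is just illustrative):

websocket_init(_TransportName, Req, _Opts) ->
    %% One processor per connection; its pid travels in the handler state.
    Pid = my_processor:start(),
    {ok, Req, #{processor => Pid}}.

websocket_handle(Data, Req, #{processor := Pid} = State) ->
    Pid ! {process_cmd, Data},
    {ok, Req, State}.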
Hi, I am a newbie in Erlang but managed to create a simple TCP server which accepts clients in passive mode and displays messages.
I spawn a new process every time a new client connects to the server. Is there a way I could send a message to the client using the process that gets spawned when the client connects?
Here is the code:
-module(test).
-export([startserver/0, user/1]).   %% user/1 must be exported for spawn/3 below

startserver() ->
    {ok, ListenSocket} = gen_tcp:listen(1235, [binary, {active, false}]),
    connect(ListenSocket).

connect(ListenSocket) ->
    {ok, UserSocket} = gen_tcp:accept(ListenSocket),
    Pid = spawn(?MODULE, user, [UserSocket]),
    gen_tcp:controlling_process(UserSocket, Pid),
    connect(ListenSocket).
user(UserSocket) ->
    case gen_tcp:recv(UserSocket, 0) of
        {ok, _Binary} ->
            %% Send basic message.
            ok;
        {error, closed} ->
            %% Operation on close.
            ok
    end.
Can I have something like this: if I do Pid ! {"Some Message"}, the message is sent to the socket associated with that process, using non-blocking I/O?
You could try this tutorial for writing a TCP server using OTP principles: http://learnyousomeerlang.com/buckets-of-sockets#sockserv-revisited
If you use a gen_server instead of your connect loop, you can store the Pids in the state. Then you can use gen_server:cast/2 to send a message to one of the Pids.
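A rough sketch of that suggestion (all names here are illustrative, and this variant keeps the sockets rather than the connection pids in the state, since gen_tcp:send/2 can be called from any process):

-module(client_registry).
-behaviour(gen_server).
-export([start_link/0, add_client/1, send_to/2]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

%% Call this from connect/1 after accepting a socket.
add_client(Socket) ->
    gen_server:cast(?MODULE, {add, Socket}).

%% Push data to one client without blocking the caller.
send_to(Socket, Message) ->
    gen_server:cast(?MODULE, {send, Socket, Message}).

init([]) ->
    {ok, []}.

handle_call(_Request, _From, Sockets) ->
    {reply, ok, Sockets}.

handle_cast({add, Socket}, Sockets) ->
    {noreply, [Socket | Sockets]};
handle_cast({send, Socket, Message}, Sockets) ->
    gen_tcp:send(Socket, Message),
    {noreply, Sockets}.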
The function you want for sending a message to the client from the controlling process is gen_tcp:send(Socket, Message). For example, if you wanted to send a one-off message on connection, you could do this:
user(UserSocket) ->
    gen_tcp:send(UserSocket, "hello"),
    case gen_tcp:recv(UserSocket, 0) of
        {ok, _Binary} ->
            %% Send basic message.
            gen_tcp:send(UserSocket, "basic message");
        {error, closed} ->
            %% Operation on close.
            gen_tcp:send(UserSocket, "this socket is closing now")
    end.