Cowboy websocket termination error - Erlang

I am implementing a Cowboy websocket handler. Everything works fine, except that when the user closes the browser it fires websocket_terminate and the server generates the following error:
Error in process <0.298.0> on node 'ews_2@servername.com' with exit value: {function_clause,
[{cowboy_req,ensure_response,[{[]},204],[{file,"src/cowboy_req.erl"},{line,1112}]},
{cowboy_protocol,next_request,3,[{file,"src/cowboy_protocol.erl"},{line,545}]}]}
The code in websocket_terminate is:
websocket_terminate(Reason, Req, State) ->
    io:format("~nWebsocket connection termination~n"),
    ok.

Resolved: the problem was that the Req object was not being passed through correctly and was getting manipulated between the callbacks. Cowboy needs a proper Req parameter to be passed along so it is still valid at the time of connection termination.
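For reference, here is a minimal sketch of how the Req term is threaded through the Cowboy 1.x callbacks so that it is still valid when websocket_terminate/3 runs (the frame patterns below are illustrative, not the original handler code):

websocket_handle({text, Msg}, Req, State) ->
    %% reply and hand back the same Req that was passed in
    {reply, {text, Msg}, Req, State};
websocket_handle(_Frame, Req, State) ->
    {ok, Req, State}.

websocket_info(_Info, Req, State) ->
    %% never return a stale or hand-built copy of Req
    {ok, Req, State}.

websocket_terminate(_Reason, _Req, _State) ->
    io:format("~nWebsocket connection termination~n"),
    ok.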

Related

Flickering HttpClient sometimes throwing IOException

I'm using java.net.http.HttpClient.newHttpClient() under Java 19 (Temurin) and perform sendAsync(...) requests from different threads on the same instance. I assume this is OK, as the javadoc states:
Once built, an HttpClient is immutable...
However, some requests fail with:
java.io.IOException: HTTP/1.1 header parser received no bytes
The weird thing is, it depends on the speed of my requests:
Requests every 5 seconds: 30% failure
Requests every 3 seconds: 0% failure
I've written a test for it:
private final HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://..."))
        .setHeader("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofByteArray("[]".getBytes()))
        .build();

@ParameterizedTest
@ValueSource(ints = {3, 5})
void httpClientTest(int intervalSeconds) throws Exception {
    HttpClient httpClient = HttpClient.newHttpClient();
    httpClient.sendAsync(request, HttpResponse.BodyHandlers.ofByteArray()).get();
    Thread.sleep(Duration.ofSeconds(intervalSeconds));
    httpClient.sendAsync(request, HttpResponse.BodyHandlers.ofByteArray()).get();
    Thread.sleep(Duration.ofSeconds(intervalSeconds));
    httpClient.sendAsync(request, HttpResponse.BodyHandlers.ofByteArray()).get();
    Thread.sleep(Duration.ofSeconds(intervalSeconds));
    httpClient.sendAsync(request, HttpResponse.BodyHandlers.ofByteArray()).get();
    Thread.sleep(Duration.ofSeconds(intervalSeconds));
    httpClient.sendAsync(request, HttpResponse.BodyHandlers.ofByteArray()).get();
}
I've already tried the following:
Doing the same with curl on the command line. No requests fail, whatever interval I try. So it's probably not a problem with the server.
Running the tests multiple times in parallel. The 5-second intervals still fail (just multiple times in parallel now). So it's probably not a problem with the server.
Creating an HttpClient.newHttpClient() for every request. No requests fail, whatever the interval. So it's probably not a problem with the server but with some internal state of the HttpClient (although it claims to be immutable?).
Do you have an idea what I could do, without needing to create a new HttpClient for every request?
Here is the answer for the record: java.net.http.HttpClient has a long default HTTP/1.1 keepAlive time, which is longer than what typical servers are configured with. This often results in the server closing idle HTTP/1.1 connections before the client does. If the server closes the connection at about the same time as the client tries to reuse it, an IOException may get raised.
If such exceptions are observed too frequently, applications should consider adapting the default keepAlive time in the client to some value shorter than what the servers it connects to are using.
A default value for the HttpClient HTTP/1.1 keepAlive time can be specified on the command line with: -Djdk.httpclient.keepalive.timeout=duration-in-seconds
So, for instance, if a server is configured with a keepAlive time of 5 seconds, you could consider supplying -Djdk.httpclient.keepalive.timeout=3 or -Djdk.httpclient.keepalive.timeout=4 on the client's java command line.

Spring AMQP Rabbit max consumer connection retries

I am trying to establish a maximum number of connection retries from my app to the RabbitMQ broker.
I have the retry interceptor:
@Bean
public RetryOperationsInterceptor retryOperationsInterceptor() {
    return RetryInterceptorBuilder.stateless()
            .maxAttempts(CommonConstants.MAX_AMQP_RETRIES)
            .backOffOptions(500, 2.0, 3000)
            .build();
}
and this is used inside the listener container:
container.setAdviceChain(new Advice[]{retryOperationsInterceptor()});
However, after a couple of retries, the consumer attempts to connect all over again in an endless loop:
2017-02-21 15:03:12.229 WARN 9292 --- [nsumerThread_92] o.s.a.r.l.SimpleMessageListenerContainer : Consumer raised exception, processing can restart if the connection factory supports it. Exception summary: org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection refused: connect
2017-02-21 15:03:12.229 INFO 9292 --- [nsumerThread_92] o.s.a.r.l.SimpleMessageListenerContainer : Restarting Consumer: tags=[{}], channel=null, acknowledgeMode=AUTO local queue size=0
2017-02-21 15:03:13.245 WARN 9292 --- [nsumerThread_93] o.s.a.r.l.SimpleMessageListenerContainer : Consumer raised exception, processing can restart if the connection factory supports it. Exception summary: org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection refused: connect
2017-02-21 15:03:13.245 INFO 9292 --- [nsumerThread_93] o.s.a.r.l.SimpleMessageListenerContainer : Restarting Consumer: tags=[{}], channel=null, acknowledgeMode=AUTO local queue size=0
2017-02-21 15:03:13.261 ERROR 9292 --- [nsumerThread_83] o.s.a.r.l.SimpleMessageListenerContainer : Failed to check/redeclare auto-delete queue(s).
I want the app to fail and error out after a MAX_RETRY limit because of the lack of connectivity to the broker.
Thanks for the help.
EDIT
As suggested by @artem-bilan, I ended up using a component:
public class BrokerFailureEventListener implements ApplicationListener<ListenerContainerConsumerFailedEvent>
In this class, in onApplicationEvent, I count the number of failures and then take the appropriate action.
The producer side is a slightly different scenario. As explained by @artem-bilan, the application needs to take care of any issues itself. I explored using netflix-hystrix, added a fallback method for the producing method, and will go with that route. Thanks much again.
Well, you misunderstood container.setAdviceChain(new Advice[]{retryOperationsInterceptor()}); a bit. It is for business errors during message processing:
Business exception handling, as opposed to protocol errors and dropped connections, might need more thought and some custom configuration, especially if transactions and/or container acks are in use. Prior to 2.8.x, RabbitMQ had no definition of dead letter behaviour, so by default a message that is rejected or rolled back because of a business exception can be redelivered ad infinitum. To put a limit in the client on the number of re-deliveries, one choice is a StatefulRetryOperationsInterceptor in the advice chain of the listener. The interceptor can have a recovery callback that implements a custom dead letter action: whatever is appropriate for your particular environment.
That is in contrast to the behaviour for connection problems:
In fact it loops endlessly trying to restart the consumer, and only if the consumer is very badly behaved indeed will it give up. One side effect is that if the broker is down when the container starts, it will just keep trying until a connection can be established.
What you need is ListenerContainerConsumerFailedEvent, which is emitted as:
private void logConsumerException(Throwable t) {
    if (logger.isDebugEnabled()
            || !(t instanceof AmqpConnectException || t instanceof ConsumerCancelledException)) {
        logger.warn(
                "Consumer raised exception, processing can restart if the connection factory supports it",
                t);
    }
    else {
        logger.warn("Consumer raised exception, processing can restart if the connection factory supports it. "
                + "Exception summary: " + t);
    }
    publishConsumerFailedEvent("Consumer raised exception, attempting restart", false, t);
}
So, you can listen for those events and stop your application when some condition is reached.

Quectel M95 HTTP POST timeout error

I'm trying to send a POST to a server, but I always get +CME ERROR: 3821. I know that this means "HTTP to read timeout". Then I tried changing the server to another one, just to test, and I get the same error 3821. My list of AT commands is:
AT+CGATT=1
AT+QIFGCNT=0
AT+QICSGP=1,"zap.vivo.com.br"
AT+QIACT
AT+QILOCIP (IP OK!)
AT+QHTTPURL=38,30
CONNECT
http://www.posttestserver.com/post.php
OK
AT+QHTTPPOST=10,50,80
CONNECT
helloworld
OK
+CME ERROR: 3821
Does anyone know what is wrong?
I solved it by using
AT+QHTTPPOST=10,50
directly, instead of
AT+QHTTPPOST=10,50,10
Hello, even though the issue is one year old, I am writing an answer in case anyone needs it. In the source file "ril_http.c" of Quectel modules, add a delay of at least 10 ms in the HTTP callback handler. It will solve the timeout error and you will be able to post successfully.

Cowboy websocket Global processing

I have a Cowboy websocket server. Many clients send messages over the websocket, and I need to do processing on those messages. I could do that in websocket_handle, but as it's realtime I would like to avoid that; instead I want to send the messages to a global process where all the processing can be done.
As each Cowboy connection has its own process, how do I run a process to which every user can send messages and in which the processing can be done?
Just to clarify, each websocket connection will have its own Erlang process in Cowboy, so messages from different websocket clients will be processed in different processes.
If you need to move the processing out of the websocket process, you can simply start a new handler/server process when your app starts (e.g. when you start Cowboy) that listens for processing commands and data. Sample processing code:
-module(my_processor).
-export([start/0]).

start() ->
    spawn(fun process_loop/0).

process_loop() ->
    receive
        {process_cmd, Data} ->
            process(Data)
    end,
    process_loop().
When you start it, also register the process with a global name. That way we can reference it from the websocket handlers later.
Pid = my_processor:start().
register(processor, Pid).
Now you can send the data from Cowboy's websocket_handle/3 function to the handling process:
websocket_handle(Data, Req, State) ->
    ...,
    processor ! {process_cmd, Data},
    ...,
    {ok, Req, State}.
Note that the my_processor process will handle the processing requests from all connections. If you want a separate process for each websocket connection, you could start my_processor in Cowboy's websocket_init/3 function, store the Pid of the my_processor process in the State returned from websocket_init, and use that Pid instead of the processor global name, as sketched below.
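Here is a minimal sketch of that per-connection variant, assuming the Cowboy 1.x callback signatures used above (keeping the processor Pid as the handler state is just one possible choice):

websocket_init(_TransportName, Req, _Opts) ->
    %% start one processor per websocket connection
    Pid = my_processor:start(),
    {ok, Req, Pid}.

websocket_handle(Data, Req, Pid) ->
    %% forward the frame to this connection's processor
    Pid ! {process_cmd, Data},
    {ok, Req, Pid}.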

How do I handle a WebSocket close from the client in Yaws?

I have implemented a simple appmod that handles WebSockets and echoes back the messages. But how do I handle a ws.close(); from the JavaScript client? I have tried the code below, but handle_message({close, Reason}) is never called and ws.onclose = function(evt) {} is never executed on the JavaScript client.
When I use the same JavaScript client code interacting with a node.js websocket, the client receives an onclose event immediately after ws.close();.
Here is the code for my simple appmod:
-module(mywebsocket).
-export([handle_message/1]).

handle_message({text, Message}) ->
    {reply, {text, <<Message/binary>>}};
handle_message({close, Reason}) ->
    io:format("User closed websocket.~n", []),
    {close, normal}.
Updated answer:
As of GitHub commit 16834c, which will eventually be part of Yaws 1.93, Yaws invokes your WebSocket callback module with a new message when the client sends a close frame. The message is:
{close, Status, Reason}
where Status is either the close status sent by the client, or the numerical value 1000 (specified by RFC 6455 for a normal close) if the client didn't include a status value. Reason is a binary holding any optional reason string passed from the client; it will be an empty binary if the client sent no reason.
Your callback handler for a close message MUST return {close, CloseReason} where CloseReason is either the atom normal for a normal close (which results in the status code 1000 being returned to the client) or another legal numerical status code allowed by RFC 6455. Note that CloseReason is unrelated to any Reason value passed by the client. Technically CloseReason can also be any other Erlang term, in which case Yaws returns status 1000 and passes the term to erlang:exit/1 to exit the Erlang process handling the web socket, but based on RFC 6455 we suggest simply returning the atom normal for CloseReason in all cases.
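For example, a close handler following these rules could look like this minimal sketch (the io:format line is purely illustrative):

handle_message({close, Status, Reason}) ->
    io:format("Client closed websocket: ~p ~p~n", [Status, Reason]),
    {close, normal}.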
Original answer, obsoleted by Yaws GitHub commit 16834c:
Yaws never passes a {close, Reason} message to your callback module. Rather, {close, Reason} is a valid return value from handle_message/1 should your callback module decide it wants to close the ws socket.
I modified the websockets_example.yaws file shipped with Yaws (version 1.92) to call this._ws.close() in the client if the user enters the "bye" message on the web page, and added an alert to the _onclose function to show that the onclose event is triggered. In this case the alert occurred, I believe because the "bye" message causes the server to close the ws socket explicitly. But I then modified the example to call this._ws.close() in the client no matter what message the user enters, and in that case no alert for onclose occurred. In this case, a check with lsof showed the ws connection from the browser to Yaws was still present.
So, for now I believe you've hit a bug where the Yaws websockets support isn't detecting the client close and closing its end. I'll see if I can fix it.
