I have a TFDConnection check running in a thread every 10 seconds. It checks the connection to a remote PostgreSQL database. When I'm debugging, if the server goes down, the connection error is raised constantly, disrupting my debugging session. How can I receive this error silently in Delphi?
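A minimal sketch of one approach, assuming the check runs in a TThread descendant, the component is named FDConnection1, and the connection object is used only by this thread (all hypothetical): catch the exception inside the thread so it never propagates. Note that the IDE debugger may still break on the raise unless the exception class is added to the ignore list under the debugger options' Language Exceptions.

// Hypothetical sketch: the polling thread swallows the failure quietly
// instead of letting the exception escape.
procedure TConnectionChecker.Execute;
var
  ServerAlive: Boolean;
begin
  while not Terminated do
  begin
    try
      FDConnection1.Connected := False;
      FDConnection1.Connected := True;   // probe the PostgreSQL server
      ServerAlive := True;
    except
      on E: Exception do
        ServerAlive := False;            // server is down: note it, do not re-raise
    end;
    // react to ServerAlive here, e.g. queue a status update to the main thread
    TThread.Sleep(10000);                // wait 10 seconds between checks
  end;
end;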
I am facing a strange problem. After a variable amount of time, my QuickFIX/J session simply hangs without any error. The session seems to be locked: I cannot write anything to it (and no exceptions appear in the log file), nor does the server send anything to me. The socket initiator thread shows that the session is logged on, but I cannot write anything to the socket because it seems to be locked.
Any suggestions?
TL;DR: Rails server restarts aren't handled gracefully by Action Cable, resulting in corrupted state. How can I fix that?
I run a Rails server which uses Action Cable, among other things.
When I change a file, the server restarts as expected. But it doesn't exactly restart gracefully.
The Action Cable connection logs {"type":"disconnect","reason":"server_restart","reconnect":true} on the established socket. Then the socket appears to be cut, and my channel's unsubscribed method does not run. Upon reconnect, however, the subscribed method does run, as it should, but now we have an asymmetry. Something as simple as keeping track of the number of connected clients becomes unreliable: after a server restart that number doubles although the true number of clients is unchanged, because the count is incremented when subscribed is called but decremented in unsubscribed, which is never called during the restart.
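For illustration, a counter kept in a channel along these lines (a hypothetical example; the cache key and Rails.cache as the store are my assumptions) drifts upward on every restart:

# Hypothetical channel showing the asymmetry described above.
class VisitorChannel < ApplicationCable::Channel
  def subscribed
    stream_from "visitors"
    # Runs on every (re)connect, including right after a server restart.
    Rails.cache.increment("connected_clients")
  end

  def unsubscribed
    # Skipped when the restart cuts the socket, so the count only grows.
    Rails.cache.decrement("connected_clients")
  end
end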
If this only happened in development, it wouldn't be a huge issue, but it happens in production as well during server restarts caused by deployments.
How do I tell Rails to restart the server gracefully, by which I mean it should also run the unsubscribed method in my channel instead of just cutting the connection?
EDIT: I've also filed an issue with Rails: https://github.com/rails/rails/issues/41005
I have an application that uses MQTT for communication between modules and with a mobile terminal.
In some situations, when messages do not arrive, the node performs an MQTT self-test (sending a message to itself), and when the self-test fails, it tries to reconnect to the broker (the mqtt offline event does not always arrive). Then two problems may arise:
If I perform an mqtt.client:close() to make sure that the client is closed (to avoid the second problem) and the client is already closed, the node resets.
If I perform an mqtt.client:connect() and the client is still connected, an exception occurs and the node resets.
Is there a way to know whether the MQTT client is connected or not?
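One workaround that comes to mind, not something the NodeMCU documentation offers directly: mirror the connection state in a flag updated from the client's callbacks, and consult it before the risky calls. An untested sketch:

-- Hypothetical workaround: track the state in a Lua flag, since the
-- client object itself has no "is connected" query.
local connected = false
m = mqtt.Client("nodemcu-node", 120)
m:on("connect", function(client) connected = true end)
m:on("offline", function(client) connected = false end)
-- later, guard the calls that reset the node:
if not connected then
  -- m:connect(...) goes here; skip it while we believe we are online
end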
Thanks for your comment. I am going to describe what I am doing, to see if you can help me:
I have two independent systems, a master and a slave. The master publishes a test message every 10 minutes. If there is no answer from the slave, it publishes a test message to itself. If this self-test message does not arrive, a disconnection from the broker is assumed and a reconnection is initiated.
And here is where the problem comes in: sometimes the client is disconnected and everything goes well, but sometimes it is still connected but unresponsive, and the node resets with an "already connected" exception.
Performing an mqtt:close() before reconnecting should be safe, but if I issue it and the client is truly disconnected, the node resets for no reason known to me.
All of this happens without any offline message ever arriving.
Instead of waiting for messages manually sent by the master client (which could fail to send for various reasons, leading a listening device to the wrong conclusion about the state of its connection to the broker), I recommend using MQTT's built-in connection management.
First, you can make sure that each client's initial connection has succeeded by including an error handler in :connect(). Once a connection has really been opened, nothing in the NodeMCU documentation indicates that the client will close itself; at worst it will go offline.
Once connected, the client only knows that something is wrong when it sends a message and does not receive a response. It sounds like you are not calling :publish() much (which would otherwise let you know by returning false), so pinging may be best. If you expect to receive a message from the broker every n seconds, set a keepalive time slightly higher than that on the client.
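An untested sketch of both ideas, with a hypothetical broker address (callback signatures can differ between NodeMCU firmware versions):

-- keepalive of 120 seconds: the client pings the broker on its own when idle
m = mqtt.Client("nodemcu-node", 120)

-- the last two arguments are the success and failure callbacks, so a
-- refused initial connection is reported instead of silently lost
m:connect("broker.example.com", 1883, false,
  function(client) print("connected") end,
  function(client, reason) print("connect failed, reason: " .. reason) end)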
Then a failure to get a response to those messages will trigger an event that you can respond to. That might be something like the following (not tested; it may work better when called outside the callback):
m:on("offline", function(client) m:close() end)
I am running a Wildfly 8.2 instance with HornetQ messaging, remotely accessible via HTTPS on port 8185.
To test the connection I am running a client on the same machine, connecting via https-remoting://localhost:8185.
From the client's point of view everything works fine: connecting, sending/receiving messages, and closing the connection.
On the server side everything works fine at first, too. However, after the period set in "connection-ttl" of the RemoteConnectionFactory has passed, the server logs the following lines:
2015-09-03 17:05:49,152 WARN [org.hornetq.core.client] (hornetq-failure-check-thread) HQ212037: Connection failure has been detected: HQ119014: Did not receive data from /192.168.160.83:63937. It is likely the client has exited or crashed without closing its connection, or the network between the server and client has failed. You also might have configured connection-ttl and client-failure-check-period incorrectly. Please check user manual for more information. The connection will now be closed. [code=CONNECTION_TIMEDOUT]
2015-09-03 17:05:49,154 INFO [org.hornetq.core.server] (hornetq-failure-check-thread) HQ221021: failed to remove connection
Final result after a longer test run (every 1-2 seconds a client connects, sends/receives messages, and closes the connection):
Wildfly consumes more and more heap memory and finally stops working with an OutOfMemoryError.
As mentioned, the connections are always explicitly closed by the client, and at closing time no error is logged on either the client or the server side. It seems that the hornetq-failure-check-thread simply isn't informed that the connection has already been closed.
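For reference, the client's send cycle looks roughly like this (an illustrative javax.jms sketch with exception handling elided, not the actual client code):

// Illustrative sketch: the connection is always closed explicitly,
// and no error shows up at this point on either side.
Connection connection = connectionFactory.createConnection(user, password);
try {
    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    MessageProducer producer = session.createProducer(queue);
    producer.send(session.createTextMessage("test"));
} finally {
    connection.close();  // completes without any logged error
}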
Any help for this issue is appreciated!
I have a Rails 3.2.11 app deployed on Heroku that has been fairly stable over time. In the last 24 hours, Pingdom has been reporting timeouts, yet I can't find any "H1X"-related errors in the logs at the matching times.
I am occasionally able to reproduce the timeouts in Google Chrome, where I get this message after about 30 seconds of requesting any page:
Chrome browser error
No data received
Unable to load the webpage because the server sent no data.
Here are some suggestions:
Reload this webpage later.
Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data.
The app will then begin serving requests normally until it happens again.
I know this is not enough info, but I can't find anything useful yet in New Relic or by scanning the logs that correlates with when the error occurred.
In one instance, I was reproducing the error in the browser while viewing the Heroku logs, and when the timeout occurred there was no evidence of the request in the logs. It's like the failed requests never make it to the app.