Pusher - how to set SERVER ping timeout?

Say user A and user B are subscribed to a presenceChannel.
A disables his wifi.
B's presenceChannel.getUsers().size() still shows 2, even after one or two minutes.
B receives the userUnsubscribed event only after about 7 minutes.
The options below
options.setActivityTimeout(long ms);
options.setPongTimeout(long ms);
set client-side timeouts, so they don't help.
Is there a way to shorten the server's ping timeout, so that B receives the userUnsubscribed event more quickly?

As per this issue, you can't set the server timeout from the client.
There is not a way for a client to set the server-side timeout, no. We are considering reducing it across the board in order to let other clients know more quickly about disconnections.
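To at least make a client notice its own dead connection sooner, you can tighten the client-side values from the question. A minimal sketch using the pusher-websocket-java client (the app key is a placeholder); note that this only speeds up A detecting its own disconnection - it does not make the server inform B any faster:

import com.pusher.client.Pusher;
import com.pusher.client.PusherOptions;

public class FastClientTimeouts {
    public static void main(String[] args) {
        PusherOptions options = new PusherOptions();
        // Client-side only: ping after 10 s of silence, give up 5 s after an unanswered ping.
        options.setActivityTimeout(10_000L);
        options.setPongTimeout(5_000L);

        Pusher pusher = new Pusher("YOUR_APP_KEY", options); // placeholder key
        pusher.connect();
        // B still learns about A's disconnection only once the SERVER's own
        // activity timeout expires, and that value is not configurable from here.
    }
}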

Related

Simulating ESP8266 server multithreading

Can't seem to find this anywhere online, so I figure I'll ask here.
I have multiple ESP8266 modules that I'm using for my home security system. My current setup works beautifully: my acquisition server polls my ESP8266 units and asks for their current I/O states (if a door is open or closed). My system polls my 3 ESP8266 modules once per second. I would like to change my setup to allow for the 8266s to keep local alarms (do local I/O checking), while my system only polls once per minute or so.
My problem is that I am having trouble programming the 8266 modules to check their own I/O status and set alarms, etc, while simultaneously running a web server to receive the polling requests from my acquisition server.
I've tried quite a few things, but can't quite seem to make it work. Here's a sample test that I've run to see if I can make the WHILE loop work simultaneously with the server.
wifi.setmode(wifi.STATIONAP)
wifi.setmode(wifi.STATION)
wifi.sta.setip({ip="192.168.1.110",netmask="255.255.255.0",gateway="192.168.1.254"})
wifi.sta.config("someSSID","somePW")
srv=net.createServer(net.TCP)
someint = 5
a = 1
while a do
    someint = someint + 1
    srv:listen(8080, function(conn)
        conn:on("receive", function(client, request)
            client:send(someint)
            client:close()
            collectgarbage()
        end)
    end)
end
It doesn't produce the expected results (the response incrementing 5, 6, 7, etc.). I've tried other conditions on my WHILE loop, such as "while true do", but nothing seems to work.
For simplicity's sake, I'll ask this and figure the rest out from there: is there some way to make the number someint increment while also listening for incoming connections? I don't want the number to increment only when a new connection comes in; I want it to keep incrementing until a connection arrives, then continue once the connection is closed.
If you need me to elaborate, I'd be happy to. Thanks in advance.

What's the upper bound connections of TServerSocket in Delphi? [duplicate]

I'm building a chat server with .NET. I have tried opening about 2000 client connections and my Linksys WRT54GL router (with Tomato firmware) drops dead each time. The same thing happens when I have several connections open in my Azureus BitTorrent client.
I have three questions:
Is there a limit on the number of open sockets I can have in Windows Server 2003?
Is the Linksys router the problem? If so is there better hardware recommended?
Is there a way to possibly share sockets so that I can handle more open client connections with fewer resources?
As I've mentioned before, Raymond Chen has good advice on this sort of question: if you have to ask about OS limits, you're probably doing something wrong. TCP only allows for a maximum of 65535 ports per IP address, and many of these are reserved and not available for general use. I would suggest that your messaging protocols need to be thought out in more detail so that OS limits are not an issue. I'm sure there are many good resources describing such systems, and there are certainly people here who would have good ideas about it.
EDIT: I'm going to put down some thoughts about implementing a scalable chat server.
First off, designate a single port on the server for clients to communicate through. Whenever a client needs to update the chat state (a new user message, for example), it does the following:
create message packet
open port to server
send packet
close port
The server then does the following:
connection request received
get packet
close connection
process packet
for each client that requires updating:
    open a connection to the client
    send the update packet
    close the connection
When a new chat session is started, the client starting the session sends a 'new session' message to the server with the client's user details and IP address for responses. The server creates a new chat session and responds with the session ID. The client then sends packets containing the messages the user types; the server processes them and forwards each message to the other clients in the same session. When a client leaves the chat, it sends an 'end session' message to the server. The server removes the client from the session and destroys the session when no clients remain in it.
Hope that gets you thinking.
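To make the flow concrete, here's a minimal sketch of that connect-send-close scheme in Java (the question is about Delphi/.NET, so treat it as typed pseudocode; the sessions map and the "sessionId|message" wire format are invented for illustration):

import java.io.*;
import java.net.*;
import java.util.*;

public class ConnectSendCloseServer {
    // session id -> addresses of clients to push updates to (assumed structure)
    private final Map<String, List<InetSocketAddress>> sessions = new HashMap<>();

    public void run(int port) throws IOException {
        try (ServerSocket server = new ServerSocket(port)) {
            while (true) {
                String packet;
                // connection request received -> get packet -> close connection
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()))) {
                    packet = in.readLine();
                }
                if (packet != null) {
                    processPacket(packet);
                }
            }
        }
    }

    private void processPacket(String packet) {
        // assumed wire format: "sessionId|message"
        String[] parts = packet.split("\\|", 2);
        if (parts.length < 2) return;
        // for each client that requires updating: open, send, close
        for (InetSocketAddress addr : sessions.getOrDefault(parts[0], List.of())) {
            try (Socket out = new Socket(addr.getAddress(), addr.getPort());
                 PrintWriter w = new PrintWriter(out.getOutputStream(), true)) {
                w.println(parts[1]);
            } catch (IOException ignored) {
                // unreachable client; a real server would drop it from the session
            }
        }
    }
}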
I have found some answers to this that I feel I should share:
Windows 2003 Server has a limit on the number of ports that may be used, but this is configurable via a registry tweak, changing the MaxUserPort setting from the default 5000 up to a maximum of around 64K.
Exploring further, I realized that the 64K port restriction is actually per IP address; hence a single server can obtain many more ports, and hence TCP connections, by either installing multiple network cards or binding more than one IP address to a network card. That way, you can scale your system to handle n x 64K ports.
For days I had a problem with the available sockets on my Windows 7 machine. After reading some articles about socket leaks in Windows 7, I applied a Windows patch - nothing changed.
The following article describes Windows connection problems in great detail:
http://technet.microsoft.com/en-us/magazine/2007.12.network.aspx
The following worked for me:
Open Regedit
Under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\Tcpip\Parameters, create: TcpNumConnections, REG_DWORD, decimal value 500 (set this according to your needs); EnableConnectionRateLimiting, REG_DWORD, value 0
Under the same Tcpip\Parameters key, create: MaxUserPort, REG_DWORD, decimal value 65534
Restart Windows

Best practice: Keep TCP/IP connection open or close it after each transfer?

My Server-App uses a TIdTCPServer, several Client apps use TIdTCPClients to connect to the server (all computers are in the same LAN).
Some of the clients only need to contact the server every couple of minutes, others once every second and one will do this about 20 times a second.
If I keep the connection between a client and the server open, I save the reconnect, but I have to check whether the connection has been lost.
If I close the connection after each transfer, it has to reconnect every time, but there's no need to check whether the connection is still there.
What is the best way to do this?
At which frequency of data transfers should I keep the connection open in general?
What are other advantages / disadvantages for both scenarios?
I would suggest a mix of the two. When a new connection is opened, start an idle timer for it. Whenever data is exchanged, reset the timer. If the timer elapses, close the connection (or send a command to the client asking if it wants the connection to remain open). If the connection has been closed when data needs to be sent, open a new connection and repeat. This way, less-often-used connections can be closed periodically, while more-often-used connections can stay open.
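A minimal sketch of that per-connection idle timer in Java (class and method names are invented; a Delphi/Indy version would follow the same shape):

import java.io.IOException;
import java.net.Socket;
import java.util.concurrent.*;

// Hypothetical wrapper implementing the idle-timeout policy described above:
// every send/receive resets the timer; if the timer fires, the connection closes.
public class IdleTimedConnection {
    private static final ScheduledExecutorService TIMER =
            Executors.newSingleThreadScheduledExecutor();
    private final Socket socket;
    private final long idleMillis;
    private volatile ScheduledFuture<?> pending;

    public IdleTimedConnection(Socket socket, long idleMillis) {
        this.socket = socket;
        this.idleMillis = idleMillis;
        touch(); // start the idle timer
    }

    // Call on every read or write to mark the connection as active.
    public synchronized void touch() {
        if (pending != null) pending.cancel(false);
        pending = TIMER.schedule(this::closeQuietly, idleMillis, TimeUnit.MILLISECONDS);
    }

    private void closeQuietly() {
        try { socket.close(); } catch (IOException ignored) {}
        // The caller opens a fresh connection the next time data must be sent.
    }
}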
Two cents from experience...
My first TCP/IP client/server application used a new connection and a new thread for each request... years ago...
Then I discovered (using Process Explorer) that this consumed network resources, because closed connections are not destroyed immediately but remain in a particular state (TIME_WAIT) for some time, and a lot of threads were being created...
I even had connection problems under a lot of concurrent requests: I didn't have enough ports on my server!
So I rewrote it following the HTTP/1.1 scheme and its KeepAlive feature. It's much more efficient, uses a small number of threads, and Process Explorer likes my new server. And I never ran out of ports again. :)
If the client has to be shut down, I'll use a thread pool so that, at the very least, I don't create a thread per client...
In short: if you can, keep your client connections alive for some minutes.
While it may be fine to connect and disconnect for an application that is active once every few minutes, the application that is communicating several times a second will see a performance boost by leaving the connection open.
Additionally, your code will be much simpler if you aren't trying to constantly open, close, or diagnose the connection. With proper open and close logic, and exception handling (SEH) around your reads and writes, there's no reason to test whether the socket is still connected before using it; just use it, and it will tell you when there is a problem.
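For instance, a minimal sketch of that "just use it" pattern (in Java for brevity; sendWithRetry and the single reconnect-and-retry step are made up for illustration, not a library API):

import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;

public class JustUseIt {
    private Socket socket;

    public JustUseIt(Socket socket) { this.socket = socket; }

    // Don't probe the socket first; write, and let a failure drive the reconnect.
    public void sendWithRetry(byte[] payload, String host, int port) throws IOException {
        try {
            OutputStream out = socket.getOutputStream();
            out.write(payload);
            out.flush();
        } catch (IOException e) {
            socket = new Socket(host, port); // reconnect once, then retry the write
            OutputStream out = socket.getOutputStream();
            out.write(payload);
            out.flush();
        }
    }
}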
I'd lean towards keeping a single connection open in most enterprise applications. It generally will lead to cleaner code, that is easier to maintain.
/twocents
I guess it all depends on your goal and the number of requests made to the server in a given time, not to mention the available bandwidth and the hardware of the server.
You need to think about the future as well: is there any chance that you will need connections to be left open? If so, you've answered your own question.
I've implemented a chat system for a project in which ~50 people (the number grows every two months) are always connected, and besides chatting it also includes data transfer, database manipulation using certain commands, etc. My implementation keeps the connection to the server open from application startup until the application is closed; no issues so far. However, if the connection is lost for some reason, it is automatically re-established and everything continues flawlessly.
Overall I suggest you try both (keeping the connection open and closing it after each use) and see which fits your needs best.
Unless you are scaling to many hundreds of concurrent connections I would definitely keep it open - this is by far the better of the two options. Once you scale past hundreds into thousands of concurrent connections you may have to drop and reconnect. I have architected my entire framework around this (http://www.csinnovations.com/framework_overview.htm) since it allows me to "push" data to the client from the server whenever required. You need to write a fair bit of code to ensure that the connection is up and working (network drop-outs, timed pings, etc), but if you do this in your "framework" then your application code can be written in such a way that you can assume that the connection is always "up".
The problem is the per-application thread limit, around 1400 threads; so with a thread per client that caps you at roughly 1300 simultaneously connected clients, give or take.
When closing a connection as a client, the local port you used stays unavailable for a while. So at high volume you end up churning through loads of different ports. For anything repetitive, I'd keep the connection open.

What is the best algorithm/technique to control client connections to the server?

I have over 50 clients connected to one server (a low-end server running Windows 2003 Server). Every time there is a power failure or switch failure, the clients disconnect from the server, though the server itself may stay up during these incidents (if a power backup is installed). When the clients come back, they automatically detect the server and initiate a connection procedure, at which point the server starts dishing out the relevant data to them. It's at this point that some clients start freezing, because the server is not quick enough to dish out the data and so blocks the rest of the clients.
I have implemented a crude method to control this client storm, but I was asking whether you have better algorithms for performing this kind of task.
NB: I'm using Asta socket components in a Delphi application, but I don't mind examples from different fields.
Similar to network collision-detection protocols, perhaps clients could wait a random period of time before initiating their connection at startup?
In addition to the random startup delay suggested by Bremen, implement some sort of "too busy; try again later" message in your protocol. Rejecting a client with a short message should not be a problem for 50, 100, or even 1000 clients. Have the clients respond by waiting a random delay and retrying, with exponential backoff.
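A minimal sketch of that client-side retry policy in Java (the connect parameters are made up, and a real protocol would also parse a "too busy" reply, which is only hinted at here):

import java.io.IOException;
import java.net.Socket;
import java.util.concurrent.ThreadLocalRandom;

public class BackoffConnector {
    // Try to connect, honouring rejections with randomized exponential backoff.
    public static Socket connect(String host, int port) throws InterruptedException {
        long delayMs = 500; // initial backoff
        while (true) {
            try {
                return new Socket(host, port); // a real protocol would also read a
                                               // possible "too busy; try later" reply here
            } catch (IOException busyOrDown) {
                // random jitter keeps rebooting clients from retrying in lockstep
                Thread.sleep(delayMs + ThreadLocalRandom.current().nextLong(delayMs));
                delayMs = Math.min(delayMs * 2, 60_000); // cap the backoff at 60 s
            }
        }
    }
}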
The solution depends on your preferences as well: is it OK for you to drop the connection request, or to send a busy message?
Another option is to start sending data to the clients in a round-robin manner. To this end you can have threads responsible for sending data to the clients in turn. The advantage in this case is that none of the clients will be starved.
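A single-threaded variant of that round-robin idea, sketched in Java (the Client interface and the chunked-send framing are my own illustration, not from the answer):

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// One sender thread walks the client list round-robin, sending one pending
// chunk per client per pass, so no single client monopolizes the sender.
public class RoundRobinSender extends Thread {
    public interface Client {
        boolean hasPendingData();
        void sendNextChunk(); // send one bounded chunk, then yield the turn
    }

    private final List<Client> clients = new CopyOnWriteArrayList<>();

    public void register(Client c) { clients.add(c); }

    @Override
    public void run() {
        while (!isInterrupted()) {
            boolean idle = true;
            for (Client c : clients) {      // one turn per client per pass
                if (c.hasPendingData()) {
                    c.sendNextChunk();
                    idle = false;
                }
            }
            if (idle) {
                try { Thread.sleep(10); } catch (InterruptedException e) { return; }
            }
        }
    }
}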

Datasnap : Is there a way to detect connection loss globally?

I'm looking to detect local connection loss. Is there a way to do that, as with the events on the Corelabs components?
Thanks
EDIT:
Sorry, I'm going to try to be more specific:
I'm currently designing a prototype using DataSnap 2009, so I've got a thin client, a stateless server app and a database server.
What I would like to be able to do is detect and handle the loss of connectivity (internet connection) between the client and the server app, and respond appropriately, i.e. display an informative error message to the user, or detect a server shutdown and silently redirect to another app server.
In 2-tier I used to manage this with ODAC components; the TOraSession component has events for handling these issues.
Normally no event is fired when a connection is broken, unless a statement is executed against the database. This is because there is no way of knowing about a connection loss unless some sort of is-alive pinging is going on.
Many frameworks check whether a connection is still valid by running a very small query against the server (getting the time from the server, for example), especially in a connection-pooling environment.
You can implement a connection-checking function in your application in one of the database events (BeforeExecute?), or set up a timer that checks every 10 seconds.
Spawn a thread on the client which periodically sends some RPC 'Ping' or 'Heartbeat' commands to the server.
If this fails, the client knows that something happened to the connection.
If the server does not hear from the client for some time period (for example, two heartbeat intervals), it can conclude that the client disconnected. However, this requires a stateful server (and your design is stateless, so it would require event processing in a secondary system, which could be fed through a message queue).
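A minimal sketch of such a client-side heartbeat thread in Java (Transport.ping() stands in for whatever cheap RPC the server exposes; onConnectionLost is a made-up callback):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HeartbeatMonitor {
    public interface Transport {
        void ping() throws Exception;   // any cheap RPC the server answers
    }

    public static ScheduledExecutorService start(Transport transport,
                                                 long intervalSeconds,
                                                 Runnable onConnectionLost) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(() -> {
            try {
                transport.ping();           // succeeded: connection is alive
            } catch (Exception e) {
                onConnectionLost.run();     // failed: surface it to the UI / reconnect logic
            }
        }, intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
        return timer;
    }
}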
