I am writing a simple client/server chat program with Indy 10 (blocking mode), and I have a question: how can I manage connections?
For example, imagine a user who is online on the server. We need to keep a connection "tunnel" open for future requests. In other words, once a user has logged in, the server should not ask for the username and password again for that user's later requests; those requests should go through the tunnel that was created when the user first connected.
How can we manage connections?
[Sorry for my bad English] If you can't understand me, please tell me and I will post again.
Thank you
For the scenario described in the question, there isn't really much management to do. To avoid having to re-authenticate on every request, simply don't close the connection. In a chat server, especially, it's quite likely that each participant will establish a connection and then continue using that same connection for the duration of the chat.
Indy server objects already keep a list of their open connections, so when you want to broadcast a chat message to the other participants, you can just iterate over that list.
I think 100,000 checks per second would consume fewer resources than holding 10,000 persistent TCP connections. In any case you would have to process those 100,000 commands somehow, so the checks themselves would not be the bottleneck.
Consider using UDP messages instead. For example, most MMO games use both TCP and UDP connections: TCP only for critical data, and UDP for everything else. In your case UDP seems acceptable. The client can send UDP packets tagged with auto-incrementing IDs, and the server can periodically send back the list of IDs it has not received, so the client can resend them.
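A rough sketch of that ID/resend bookkeeping, written in Python purely for illustration (the JSON framing, the helper names, and the gap-detection policy are all assumptions, not part of Indy or any particular protocol):

import json
import socket

# Client side: tag every datagram with an auto-incrementing ID and keep
# each payload around until the server has confirmed it.
def send_messages(sock, server_addr, messages):
    unacked = {}                                   # id -> raw packet
    for seq, text in enumerate(messages):
        packet = json.dumps({"id": seq, "text": text}).encode()
        sock.sendto(packet, server_addr)
        unacked[seq] = packet
    return unacked

# Server side: remember which IDs have arrived and report the gaps so
# the client knows which packets to resend.
def missing_ids(received_ids):
    if not received_ids:
        return []
    expected = range(min(received_ids), max(received_ids) + 1)
    return [i for i in expected if i not in received_ids]

# Usage idea: create a UDP socket with
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# call send_messages() on the client, and have the server periodically
# send missing_ids(...) back so the client can resend from 'unacked'.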
One option would be to create a unique session ID (or "token") on the server side, for example a GUID, when a client logs in. The client then includes this token in every request.
The server would maintain a list of client sessions and associated session data, and look up the token in this list.
Even if a client is temporarily disconnected from the Internet but still knows its token, the application can reconnect and continue the session with the server.
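A minimal sketch of that token scheme, in Python, with an in-memory dictionary standing in for whatever session store you actually use (the function names and the credential check are made up for illustration):

import time
import uuid

# token -> session data; a real server would also expire idle sessions.
sessions = {}

def login(username, password):
    # Stand-in credential check - replace with your real authentication.
    if password != "secret":
        return None
    token = str(uuid.uuid4())          # GUID-style session token
    sessions[token] = {"user": username, "last_seen": time.time()}
    return token

def handle_request(token, payload):
    # Later requests carry only the token, never username/password again.
    session = sessions.get(token)
    if session is None:
        raise PermissionError("unknown or expired token - log in again")
    session["last_seen"] = time.time()
    return "%s sent: %s" % (session["user"], payload)

# The client keeps the token, so it can reconnect after a network drop
# and simply continue the session:
token = login("alice", "secret")
print(handle_request(token, "hello"))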
I made an instant messaging app using the MQTT protocol.
I want to add some extra data about messages to the payload, such as the sent time (server time, not client time), and also provide some kind of server-side payload sanitizing.
Is it a good idea to add a third-party client with superuser privileges between the message sender and the message receiver on the broker's local machine to do this job?
Or is there a better approach?
By the way, I'm using EMQTT as the message broker.
From a pure security point of view, having direct peer-to-peer traffic (without filtering and sanitising) sounds like a dangerous idea. (At least in the Internet-of-Things domain I would clearly object to it.)
Why? Because the clients are outside of your control (i.e. a hacker can reverse-engineer them) and can inject arbitrary traffic to exploit security holes on the receiving side of other clients.
So sanitising on the server side sounds like a very good idea.
I would suggest two topics: one (inbound) topic the clients use to publish messages, and another (outbound) topic the clients subscribe to for receiving messages. A server-side component would then read messages from the inbound topic, sanitise them, and publish them to the outbound topic.
This decoupling also makes it easier to introduce MQTT payload changes: if you update the payload in a non-compatible way, introduce a new inbound topic and keep the old inbound topic too. This allows you to support old and new clients during the transition phase.
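A minimal sketch of such a server-side bridge, written here with the Python paho-mqtt client purely for illustration (the topic names, the sanitising rule, and the JSON payload fields are assumptions about your message format, not anything EMQTT requires):

import json
import time
import paho.mqtt.client as mqtt

INBOUND = "chat/inbound"     # clients publish here (assumed topic name)
OUTBOUND = "chat/outbound"   # clients subscribe here (assumed topic name)

def sanitise(text):
    # Stand-in for your real sanitising rules.
    return text.replace("<", "&lt;").replace(">", "&gt;")

def on_message(client, userdata, msg):
    try:
        payload = json.loads(msg.payload)
    except ValueError:
        return                                        # drop malformed messages
    payload["text"] = sanitise(payload.get("text", ""))
    payload["server_time"] = int(time.time() * 1000)  # server-side timestamp
    client.publish(OUTBOUND, json.dumps(payload), qos=1)

client = mqtt.Client()               # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("localhost", 1883)    # the bridge runs on the broker's machine
client.subscribe(INBOUND, qos=1)
client.loop_forever()

Note that broker-side ACLs would still be needed to stop clients from publishing directly to the outbound topic and bypassing the bridge.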
I have a server with a number of clients, and each client is able to ask the server for information about the other clients. When it does so, the server has to get the information from each client and then return it to the asking client.
If two clients make this request at the same time, a deadlock can appear. The thing is that this request happens so often that the client does not need to care if it sometimes fails. How do I just ignore the timeout message that currently terminates everything when this problem appears?
Strict answer to your question
If you're using gen_server, then call/3 allows you to specify a timeout (and call/2 defaults to 5 seconds).
This code will either give you the gen_server's reply or the atom timeout if it failed.
Result = try gen_server:call(Target, Message, Timeout) of
             Reply ->
                 Reply
         catch
             exit:{timeout, _} ->
                 timeout
         end.
Better answer
evnu and rvirding recommended using asynchronous calls, which is a superior technique. Here are a few possible ways to do this:
1. Server stores the data
Have clients periodically gen_server:cast/2 to the server to tell it their information. The server stores the latest information about each client. When a client wants to learn about its siblings, it calls gen_server:call/2 to the server.
The server call is synchronous because it doesn't need to contact any client; it's just returning the cached values.
2. Async return
The clients call gen_server:cast/2 to request data from the server. The server calls gen_server:call/2 to fetch data from each client on demand. Once the server has collected all data, it calls gen_server:cast/2 to pass the collected data back to the client that requested it.
Here, the clients are always waiting to handle requests from the server. The server calls the client synchronously, but can't deadlock because there is only one server.
3. More gen_servers
This one's hard to describe without knowing more about your code, but you could break the clients into more pieces: one piece to handle data requests and another to generate the requests.
Based on your description that the clients make this data request "so often", I think you should try the first method. If your clients are requesting data frequently enough, having the server collect and cache the client information will actually result in fresher data for the clients.
We're using CometD 2 to connect a central data provider with several backends that consume the data. Up to now, when one of the backends fails briefly, all messages posted in the meantime are lost. Now we have heard about the "Acknowledge Extension" for CometD. It is supposed to keep a server-side list of messages and deliver them once a client reports to be back online. Here are some questions:
1) Does this also work with several clients?
2) The documentation (http://cometd.org/documentation/2.x/cometd-ext/ack) says: "Note that if the disconnected browser is disconnected for in excess of maxInterval (default 10s), then the client will be timed out and the unacknowledged queue discarded." -> does this mean that in case my client doesn't restore within the maxInterval, the messages are lost anyway?
Hence,
2.1) What's the maximal maxInterval? Which consequences does it have to set it to a high value?
2.2) We'd need a secure mechanism for fail outs of at least a few minutes. Is this possible? Are there any alternatives?
3) Is it really only necessary to add the two extensions in both the client and the CometD server? We're using Jetty for the server and .NET Oyatel for the client. Does anyone have any experience with this?
I'm sorry for this bunch of questions, but unfortunately, the CometD project isn't really well documented. I really appreciate any answers.
Cheers,
Chris
1) Does this also work with several clients?
Yes, it does. There is one message queue allocated for each client (see AcknowledgedMessagesClientExtension).
2) does this mean that in case my client doesn't restore within the maxInterval, the messages are lost anyway?
Yes, it does. When the client can't reach the server for maxInterval milliseconds, the server will throw away all state associated with that client.
2.1) What's the maximal maxInterval? Which consequences does it have to set it to a high value?
maxInterval is a servlet parameter of the Cometd servlet. It is internally treated as a long value, so the maximal value for it is Long.MAX_VALUE.
Example configuration:
<init-param>
<!-- The max period of time, in milliseconds, that the server will wait for
a new long poll from a client before that client is considered invalid
and is removed -->
<param-name>maxInterval</param-name>
<param-value>10000</param-value>
</init-param>
Setting it to a high value means that the server will wait longer before throwing away the state associated with a client (from the time the client stops contacting the server).
I see two problems with this. First, the memory requirements of the server will potentially be higher (which may also make denial of service easier). Second, the RemoveListener isn't called on the Server before the maxInterval expires, which may require you to implement additional logic that differentiates between "momentarily unreachable" and "disconnected".
2.2) We'd need a secure mechanism for fail outs of at least a few minutes. Is this possible? Are there any alternatives?
Yes, it is possible to configure the maxInterval to last for a few minutes.
An alternative would be to restore any server side state on every handshake. This can be achieved by adding a listener to "/meta/handshake" and publishing a message to a "/service/" channel (to make sure only the server receives the message), or by adding an additional property to the "ext" property of the handshake message. Be careful to let the client restore only valid state (sign it on the server if you must).
3) Is it really only necessary to add the two extensions in both the client and cometD server?
On the server it is sufficient to do something like:
bayeux.addExtension(new AcknowledgedMessagesExtension());
I don't know how you'd do it on Oyatel. In JavaScript it suffices to simply include the extension (dojo.require, or a script include for jQuery).
When a client with the AckExtension connects to the server, a message similar to the following will be logged (from my Jetty console log):
[qtp959713667-32] INFO org.cometd.server.ext.AcknowledgedMessagesExtension - Enabled message acknowledgement for client 51vkuhps5qgsuaxhehzfg6yw92
Another note, because it may not be obvious: the ack extension only provides a server-to-client delivery guarantee, not client-to-server. That is, when you publish a message from the client to the server, it may never reach the server and will be lost.
Once the message has made it to the server, the ack extension will ensure that all recipients connected at that time will receive the message (as long as they aren't unreachable for maxInterval milliseconds).
It is relatively straightforward to implement client-side retrying if you listen to notifications on "/meta/unsuccessful" and resend the message (the original message that failed is passed as message.request to the handler).
I have a server running TIdTCPServer, and clients using a web browser (or any other software) to communicate. I don't know the protocol, but what I'm trying to do is relay the data between one client and another connection (both connected to the same TIdTCPServer). For example, the data sent by the first client is transmitted to the second client, and the data sent by the second client is transmitted to the first client, like a proxy (I can't really use a proxy server since it's just this one case), and the TIdTCPServer should still be accepting other clients and processing their data.
I stumbled at the first line of code, since TIdContext.Connection.Socket.ReadLn requires a delimiter, and the client's protocol is unknown to the server.
Any ideas?
Thanks.
You can look at the source code for TIdMappedPortTCP and TIdHTTPProxyServer to see how they pass arbitrary data between connections in both directions. Both components use TIdSocketList.SelectReadList() to detect when either connection has data to read. TIdMappedPortTCP then uses TIdBuffer.ExtractToBytes() and TIdIOHandler.Write(TIdBytes), whereas TIdHTTPProxyServer uses TIdTCPStream and TIdBuffer.ExtractToStream() instead.
If you have two browser windows open and you use each to navigate to a different website, then how does the software know which HTTP response belongs to which browser instance?
Update
It seems that the distinction is made by the inbound TCP port numbers. But what about network messages that don't involve TCP/UDP? For example, if you open two terminal windows and use both to send a ping to the same remote server, how does each reply find its way back to its own terminal instance?
Typically, each browser instance creates its own socket to communicate with the server. Though the remote (server-side) port is the same for all of these sockets (usually TCP 80 or 443), the local (client-side) port is different for each one. Thus, there are no conflicts when the server responds to the requests, since each response is sent back to a different local port.
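A quick way to see this in practice is to open two connections to the same server and print their local endpoints; the short Python sketch below does just that (the host name is only an example):

import socket

# Two independent connections to the same remote host and port.
a = socket.create_connection(("example.com", 80))
b = socket.create_connection(("example.com", 80))

# Same remote endpoint for both, but the OS picked a different local
# (ephemeral) port for each socket - that is what keeps the two
# response streams apart.
print("a:", a.getsockname(), "->", a.getpeername())
print("b:", b.getsockname(), "->", b.getpeername())

a.close()
b.close()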
Tools like ping use ICMP packets, which provide their own way to uniquely identify the calling application (a unique identifier and a sequence number).
They're usually associated with different TCP connections, which between them have used different ports on the client end. This means that the TCP stack at the client end can tell them apart and passes them to the client through the sockets API in an easily distinguishable way (typically as different file descriptors).
The exception to this is pipelining, where multiple HTTP requests can be sent over one connection as an optimisation. The responses to requests sent like this are returned in the order the requests were sent, however, which makes it trivial to match each response to its request.