I am creating an iPhone application that will use TCP sockets for server-side communication. In my case I will be listening on a socket for the server's responses, and there can be multiple responses in JSON format at a time which will need to be displayed in a table view. I am confused about how to handle them (parsing the responses and displaying the data): should I use a thread for each incoming response, or a mutex-lock type strategy so that the first response is handled first and then the next one? Please guide me; any example with code would be very helpful. Thanks in advance.
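A common pattern for this is neither one thread per response nor a manual mutex, but a single serial dispatch queue that parses each response in arrival order and hands the result to the main thread for the table view. A minimal sketch along those lines; the class, queue label, messages array and tableView property are placeholders rather than anything from the original project:

```objective-c
// Sketch: serialize handling of incoming JSON responses with a GCD serial
// queue instead of one thread per response. Names below (parseQueue,
// messages, tableView) are placeholders, not from the original post.
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>

@interface ResponseHandler : NSObject
@property (nonatomic, strong) dispatch_queue_t parseQueue;
@property (nonatomic, strong) NSMutableArray *messages;   // backing data for the table view
@property (nonatomic, weak)   UITableView *tableView;
@end

@implementation ResponseHandler

- (instancetype)init {
    if ((self = [super init])) {
        // A serial queue guarantees responses are parsed in arrival order,
        // so no explicit mutex is needed.
        _parseQueue = dispatch_queue_create("com.example.response-parse", DISPATCH_QUEUE_SERIAL);
        _messages = [NSMutableArray array];
    }
    return self;
}

// Call this from the socket read callback with one complete JSON response.
- (void)handleResponseData:(NSData *)data {
    dispatch_async(self.parseQueue, ^{
        NSError *error = nil;
        id json = [NSJSONSerialization JSONObjectWithData:data options:0 error:&error];
        if (!json) {
            NSLog(@"JSON parse error: %@", error);
            return;
        }
        // UIKit must only be touched on the main thread.
        dispatch_async(dispatch_get_main_queue(), ^{
            [self.messages addObject:json];
            [self.tableView reloadData];
        });
    });
}

@end
```

A serial queue gives the same ordering guarantee a mutex would, without blocking the socket thread.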
Let's say we have an app that displays some kind of dashboard. This dashboard should be updated extremely often (say every 500 ms). I'm familiar with long poll requests and know how I could implement them with NSURLConnection on a background thread. However, it seems this will lead to two big problems: request/response concurrency and the overhead of long poll requests at such short intervals. Although the first problem can be solved with some techniques, I think such frequent requests to a server are a general problem.
So after some research I found the NSStream class and its descendants NSInputStream and NSOutputStream. My idea is to make a connection to the server and keep it alive the whole time, and at 500 ms intervals send a GET request on the output stream and read the data from the input stream.
So here are my questions:
Am I on the right track for implementing this?
Should the server be prepared in some special way to deal with this kind of connection (I mean, won't it drop the connection after some timeout)?
Is there a real benefit in skipping connection establishment to improve app performance and lower the refresh time of the dashboard?
UPDATE
I've implemented it the classic way. When I hit the request method and the previous request has not yet finished, I cancel it, so basically I have only one active connection at a time to prevent concurrency. Also, if I haven't received a response within 500 ms I don't need it at all, as it will be outdated anyway. I'm getting pretty neat results on both Wi-Fi and 3G. As I expected, on EDGE a response is dropped every 3 to 4 requests.
Still wondering about the streams, however. I did try to follow this Apple reference, but when I send an HTTP GET via the output stream, my input stream returns 403 Forbidden from the server. This could be entirely a server problem, but I'm not sure whether this is the right track and whether it's worth changing the server side.
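For reference, the stream idea from the update boils down to something like the sketch below: open a socket-backed stream pair once, then write a raw HTTP/1.1 GET every 500 ms. The host, port and path are placeholders, reading the reply via the stream delegate is omitted, and whether repeated requests on one connection work depends entirely on the server's keep-alive configuration.

```objective-c
// Sketch of the stream idea: one persistent socket, raw HTTP GETs written to
// the output stream. Host, port and path are placeholders.
#import <Foundation/Foundation.h>

static NSInputStream  *inputStream;
static NSOutputStream *outputStream;

static void openStreams(void) {
    CFReadStreamRef readStream;
    CFWriteStreamRef writeStream;
    CFStreamCreatePairWithSocketToHost(kCFAllocatorDefault,
                                       CFSTR("dashboard.example.com"), 80,
                                       &readStream, &writeStream);
    inputStream  = (__bridge_transfer NSInputStream *)readStream;
    outputStream = (__bridge_transfer NSOutputStream *)writeStream;
    [inputStream  scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    [outputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    [inputStream open];
    [outputStream open];
}

// Call this every 0.5 s from a repeating timer once both streams are open.
// A complete HTTP/1.1 request needs at least a Host header, and
// "Connection: keep-alive" asks the server not to hang up after replying.
static void sendPoll(void) {
    NSString *request = @"GET /dashboard HTTP/1.1\r\n"
                        @"Host: dashboard.example.com\r\n"
                        @"Connection: keep-alive\r\n"
                        @"\r\n";
    NSData *bytes = [request dataUsingEncoding:NSUTF8StringEncoding];
    // Responses are read via NSStreamEventHasBytesAvailable on inputStream (not shown).
    [outputStream write:(const uint8_t *)bytes.bytes maxLength:bytes.length];
}
```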
Q1) Am I on the right track for implementing this?
A) I'd suggest WebSockets.
Q2) Should the server be prepared in some special way to deal with this kind of connection (won't it drop the connection after some timeout)?
A) Even though you could try configuring persistent (keep-alive) connections on the web server to do this easily, I'd suggest WebSockets.
Q3) Is there a real benefit in skipping connection establishment to improve app performance and lower the refresh time of the dashboard?
A) Yes. Opening and closing connections are costly operations; that's why keep-alive connections exist and why Google introduced SPDY for web apps. So sockets would solve this problem for you.
WebSockets are a good way to go.
Frequent polling is not the way to go, because you would be contacting the server every 0.5 seconds.
WebSocket provides full-duplex communication. Additionally, WebSocket enables streams of messages on top of TCP; TCP alone deals with streams of bytes with no inherent concept of a message.
The WebSocket protocol was standardized by the IETF as RFC 6455 in 2011, and the WebSocket API in Web IDL is being standardized by the W3C.
WebSocket is designed to be implemented in web browsers and web servers, but it can be used by any client or server application. The WebSocket Protocol is an independent TCP-based protocol. Its only relationship to HTTP is that its handshake is interpreted by HTTP servers as an Upgrade request. The WebSocket protocol makes more interaction between a browser and a website possible, facilitating live content and the creation of real-time games. This is made possible by providing a standardized way for the server to send content to the browser without being solicited by the client, and by allowing messages to be passed back and forth while keeping the connection open. In this way a two-way (bi-directional) ongoing conversation can take place between a browser and the server.
You can find more about WebSockets here
Here are some good WebSocket client libraries in Objective-C:
SocketRocket and
UnittWebSocketClient
Note:
These libraries use NSStream
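A minimal SocketRocket setup might look like the sketch below; the URL and the subscribe message are placeholders, and the delegate callbacks shown are the standard SRWebSocketDelegate methods. The point is that the server pushes updates when it has them, so the 500 ms polling loop disappears.

```objective-c
// Minimal SocketRocket sketch for the dashboard case: the server pushes an
// update whenever it has one, so no client-side polling loop is needed.
// The URL and the subscribe string are placeholders.
#import <Foundation/Foundation.h>
#import "SRWebSocket.h"

@interface DashboardSocket : NSObject <SRWebSocketDelegate>
@property (nonatomic, strong) SRWebSocket *socket;
@end

@implementation DashboardSocket

- (void)connect {
    self.socket = [[SRWebSocket alloc] initWithURL:[NSURL URLWithString:@"ws://dashboard.example.com/updates"]];
    self.socket.delegate = self;
    [self.socket open];
}

- (void)webSocketDidOpen:(SRWebSocket *)webSocket {
    // Optionally tell the server which dashboard we want.
    [webSocket send:@"subscribe:dashboard"];
}

- (void)webSocket:(SRWebSocket *)webSocket didReceiveMessage:(id)message {
    // message is an NSString or NSData frame pushed by the server;
    // parse it and refresh the UI on the main thread.
    NSLog(@"update: %@", message);
}

- (void)webSocket:(SRWebSocket *)webSocket didFailWithError:(NSError *)error {
    NSLog(@"socket failed: %@", error);
}

@end
```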
Hope this helps
As long as your server is an HTTP server, it will disconnect you after returning the result.
So if you want to keep the connection alive long enough, you must implement your own protocol based on NSStream/sockets on both the iOS and server sides.
You may choose a well-known socket-based protocol like WebSocket; Square's SocketRocket is a popular NSStream-based library for it on iOS.
If your dashboard needs real-time updates, I think it's worth deploying an NSStream/socket-based protocol.
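If you do roll your own protocol over NSStream instead of using WebSocket, the first thing it needs is message framing, because TCP only delivers a byte stream. A minimal sketch using a 4-byte length prefix; the prefix is just one common convention, not something required by NSStream, and the buffering around the stream delegate callbacks is omitted:

```objective-c
// Sketch of length-prefixed framing for a custom NSStream-based protocol:
// each message is [length (4 bytes, network byte order)][payload].
#import <Foundation/Foundation.h>
#include <arpa/inet.h>

// Wrap one message for writing to the output stream.
static NSData *FrameMessage(NSData *payload) {
    uint32_t length = htonl((uint32_t)payload.length);
    NSMutableData *frame = [NSMutableData dataWithBytes:&length length:sizeof(length)];
    [frame appendData:payload];
    return frame;
}

// Try to pull one complete message out of the receive buffer.
// Returns nil and leaves the buffer untouched if the message is still incomplete.
static NSData *UnframeMessage(NSMutableData *buffer) {
    if (buffer.length < sizeof(uint32_t)) return nil;
    uint32_t lengthNetworkOrder;
    [buffer getBytes:&lengthNetworkOrder length:sizeof(lengthNetworkOrder)];
    uint32_t length = ntohl(lengthNetworkOrder);
    if (buffer.length < sizeof(uint32_t) + length) return nil;
    NSData *payload = [buffer subdataWithRange:NSMakeRange(sizeof(uint32_t), length)];
    // Drop the consumed bytes from the front of the buffer.
    [buffer replaceBytesInRange:NSMakeRange(0, sizeof(uint32_t) + length) withBytes:NULL length:0];
    return payload;
}
```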
Is it possible for a CORBA service to respond to one request multiple times?
In my case, the CORBA service collects a bunch of data, which takes a long time per request. In order to reduce the delay before the client receives a response, we want the service to respond as soon as the size of the collected data reaches 1024 KB. For example, if the total data size is 10 MB, the service would respond to the client 10 times over one connection.
My understanding is that the CORBA server should keep the connection between client and server open, and deliver new data on this connection once it becomes available. The client, on the other hand, should loop over the incoming responses. Neither the client nor the server should close the connection until the server says all the data has been delivered. This procedure is similar to a chunked response in the HTTP protocol.
I would appreciate it if you could provide some tips or sample links in this area.
The CORBA server side is only able to send the data to the client when the server application code returns from the function call. If you have just one operation in the IDL that returns 10 MB, then the ORB can only transmit that data to the client when the operation has finished. In order to allow the ORB to send the data it already has, you have to modify your IDL and add a way for the client to start the operation and then poll for chunks of data as they become available. Each invocation of the poll operation then returns one chunk.
Some examples of how to do this are part of TAO. You can find the examples under ACE_wrappers/TAO/examples/Content_Server. It also has an example where the server pushes the data to the client when a chunk is available.
So I'm making an iOS app, but this is more of a general networking question.
What I have is one phone that acts as the server, and a bunch of phones that connect to it as clients. Basically it's a game/music sharer.
It's kind of hard to really get into the semantics of it, but that isn't important.
What is important is that the server and client are repeatedly sending each other commands and positions rapidly over a TCP connection, and sometimes the client wants to send the server a music file (4MB usually) to play as the music.
The problem I initially encountered was that when sending the large file, it would hang the sending of commands from the client to the server.
My naive solution was to create another socket that connects to the server and sends the file over it. The server would check the IP address of the new socket, and if it matched the IP of an existing connection it would tie the new socket to that connection, receive the file, and then disconnect the socket.
But the problem with this is that there is a 1-2 second delay for the socket to connect, and I'm aware that man-in-the-middle attacks can occur.
Is there a more elegant solution to this problem?
I would not call your solution naive; this is largely how FTP works, and separating the data and control paths is a good design pattern in my view.
I wouldn't worry about the man-in-the-middle thing. If you wanted, you could add a command to the client that it responds to over the data connection with a secret the server supplies; this would let you associate the connections without relying on IP addresses.
If the delay is a problem, then why not establish both connections at the start? The overhead of a few TCP connections on an operating system is not usually significant.
You could also use the two connections for both commands and data, alternating between them. Since both the server and client know when a connection is busy, they can choose to use the idle one. The advantage of this is that it keeps both connections in use, ensuring they are both known to be working.
You should probably also use a different thread for each socket, but I suspect you are already doing this, since it won't work too well without it.
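As an illustration of the secret-token idea on the client side, the data connection could identify itself before sending any file bytes, roughly like the sketch below. The "TOKEN" line and the length-prefixed payload are invented framing for this example, not part of any existing protocol.

```objective-c
// Sketch: client side of the token association. The secret arrives over the
// existing command connection; the freshly opened data connection sends it
// first so the server can pair the two without comparing IP addresses.
#import <Foundation/Foundation.h>
#include <arpa/inet.h>

static void sendFileOverDataConnection(NSOutputStream *dataStream,
                                       NSString *token,
                                       NSData *fileData) {
    // 1. Identify this connection with the server-supplied secret.
    NSString *hello = [NSString stringWithFormat:@"TOKEN %@\n", token];
    NSData *helloBytes = [hello dataUsingEncoding:NSUTF8StringEncoding];
    [dataStream write:(const uint8_t *)helloBytes.bytes maxLength:helloBytes.length];

    // 2. Send the file as a length-prefixed blob so the server knows when
    //    the transfer is complete. Real code should loop on the return value
    //    of write: (and wait for NSStreamEventHasSpaceAvailable), since one
    //    call will not usually accept a whole 4 MB buffer.
    uint32_t length = htonl((uint32_t)fileData.length);
    [dataStream write:(const uint8_t *)&length maxLength:sizeof(length)];
    [dataStream write:(const uint8_t *)fileData.bytes maxLength:fileData.length];
}
```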
I am using TIdCmdTCPClient and TIdCmdTCPServer. Suddenly I find that I might like to have bi-directional communication.
What would be best? Should I possibly use some other components? If so, which? Or should I kludge it and have the 'client' poll the 'server' to ask if it wishes to communicate anything?
This is a very small system. Two clients and ten servers, with a burst of one transaction every 30 to 60 seconds for a few minutes once a day, so the overhead of polling is inconsequential.
I'm just wondering if there is a 'correct' way.
Update: this really is an incredibly simple system. Very little traffic, and all of it simple. All transmissions are an indication of an event type and an optional single parameter.
<event type> [ <parameter>] e.g. "HERE_IS_SOME_DATA 42"
This can be sent in both directions; however, there is no "reply" as such. Just fire off a message (and hope that it got there)? Receive an ack with no data? Does the absence of an exception indicate that the message was successfully sent?
Would it be possible (or would it be overkill) to use two TIdCmdTCPServers?
Both TIdCmdTCPClient and TIdCmdTCPServer continuously poll their socket endpoints for inbound data during the lifetime of the connection. You do not have to do anything extra for that. So, as soon as a TIdCmdTCPClient connects to the TIdCmdTCPServer, both components will initially be in a reading state until one of them sends a command to the other.
Now, there is a problem with doing that - as soon as either component sends that first command, the receiving component will interpret it as a command and send back a reply, which the other component will interpret as a command and send back a reply, which will be interpreted as a command and answered with another reply, and so on, causing an endless cycle of replies back and forth. For that reason, it is not wise to use TIdCmdTCPClient and TIdCmdTCPServer together. You should either use TIdTCPClient with TIdCmdTCPServer, or use TIdCmdTCPClient with TIdTCPServer. Depending on what exactly your protocol looks like, you may have to forgo using TIdCmdTCPClient and TIdCmdTCPServer altogether and just use TIdTCPClient with TIdTCPServer so you have more control over reading and writing on both ends. It is hard to answer with actual code without first knowing what the communication protocol should look like.
A single TCP socket connection can be used in two directions. The server can send data asynchronously to the client at any time. It is up to the client however to read the socket, for asynchronous processing this is done in a listener thread which reads from the socket and synchronizes incoming data operations with the main worker thread.
An example use case in the Indy components is the Telnet client component (TIdTelnet) which has a receive thread listening for server messages.
But you also asked about the 'correct' way - and then the answer depends on other factors such as network stability, guaranteed delivery and how to handle temporary server outages. In enterprise environments, one central messaging hub is preferred in many use cases, so that all parties connect only to this central server which is only responsible for reliable message delivery, and keeps messages until the recipient is available.
You can download the INDY 10 TCP server demo sample code here.
I'm looking to detect local connection loss. Is there a means to do that, as with the events on the Corelabs components?
Thanks
EDIT:
Sorry, I'm going to try to be more specific:
I'm currently designing a prototype using DataSnap 2009. So I've got a thin client, a stateless server app and a database server.
What I would like to be able to do is detect and handle connection loss (internet connectivity) between the client and the server app appropriately, i.e. display an informative error message to the user, or detect a server shutdown and silently redirect to another app server.
In 2-tier I used to manage that with the ODAC components; the TOraSession has some events to handle these issues.
Normally there is no event fired when a connection is broken, unless a statement is executed against the database. This is because there is no way of detecting a connection loss unless some sort of is-alive pinging is going on.
Many frameworks check whether a connection is still valid by running a very small query against the server, for example getting the time from the server, especially in a connection-pooling environment.
You can implement a connection-checking function in your application in some of the database events (BeforeExecute?), or use a timer that checks every 10 seconds.
Spawn a thread on the client which periodically sends some RPC 'Ping' or 'Heartbeat' commands to the server.
If this fails, the client knows that something happened to the connection.
If the server does not hear from the client for some period of time (for example, two times the heartbeat interval), it can conclude that the client disconnected. However, this requires a stateful server (and your design is stateless, so it would require event processing in a secondary system, which could be fed through a message queue).
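The heartbeat pattern itself is framework-agnostic; a client-side sketch (written here in Objective-C only because that is the language used elsewhere on this page, with sendPingWithCompletion: standing in for whatever ping RPC your actual framework exposes) might look like:

```objective-c
// Language-agnostic heartbeat pattern, sketched in Objective-C purely for
// illustration. sendPingWithCompletion: is a placeholder for the real
// framework-specific "Ping" RPC.
#import <Foundation/Foundation.h>

@interface HeartbeatMonitor : NSObject
@property (nonatomic, strong) NSTimer *timer;
@end

@implementation HeartbeatMonitor

- (void)start {
    // Ping every 10 seconds; a failed ping means the connection is gone.
    self.timer = [NSTimer scheduledTimerWithTimeInterval:10.0
                                                  target:self
                                                selector:@selector(ping)
                                                userInfo:nil
                                                 repeats:YES];
}

- (void)ping {
    [self sendPingWithCompletion:^(BOOL ok) {
        if (!ok) {
            // Connection lost: inform the user or fail over to another server.
            NSLog(@"heartbeat failed, connection lost");
        }
    }];
}

// Placeholder for the framework-specific ping/keep-alive call.
- (void)sendPingWithCompletion:(void (^)(BOOL ok))completion {
    completion(YES);
}

@end
```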