Asynchronously send a file over a TCP connection - iOS

So I'm making an iOS app, but this is more of a general networking question.
What I have is one phone that acts as the server, and a bunch of phones that connect to it as clients. Basically it's a game/music sharer.
It's kind of hard to really get into the semantics of it, but that isn't important.
What is important is that the server and clients are repeatedly and rapidly sending each other commands and positions over a TCP connection, and sometimes a client wants to send the server a music file (usually about 4 MB) to play as the music.
The problem I initially encountered was that sending the large file would hang the sending of commands from the client to the server.
My naive solution was to create another socket to connect to the server just to send the file: the server would check the IP of the new socket, and if it matched the IP of an existing connection it would tie the new socket to that connection, receive the file, and then disconnect the socket.
But the problem with this is that there's a 1-2 second delay for the socket to connect, and I'm aware that man-in-the-middle attacks can occur.
Is there a more elegant solution to this problem?

I would not call your solution naive; this is largely how FTP works, and separating the data and control paths is a good design pattern in my view.
I wouldn't worry about the man-in-the-middle thing. If you wanted, you could add a command the client responds to over the data connection with a secret the server supplies; this would let you associate the connections without relying on IP addresses.
If the delay is a problem, then why not establish both connections at the start? The overhead of a few TCP connections on an operating system is not usually significant.
You could also use the two connections for both commands and data, alternating between them. Since both the server and the client know when a connection is busy, they can choose to use the idle one. The advantage of this is that it keeps both connections busy, ensuring they are both known to be working.
You should probably also use a different thread for each socket, but I suspect you are already doing this, since it won't work too well without it.
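To make the secret idea concrete, here is a minimal sketch in Python (treat it as pseudocode for the equivalent iOS socket calls). The 16-byte token length, the function names, and the simplifying assumption that the two accepted sockets belong to the same client are all mine, not from the question:

```python
import os
import socket

TOKEN_LEN = 16  # arbitrary choice for the sketch

def recv_exact(sock, n):
    # Read exactly n bytes, or fail if the peer closes early.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed during token exchange")
        buf += chunk
    return buf

def server_accept_pair(listener):
    # Accept the control connection and hand the client a one-time token.
    control, _ = listener.accept()
    token = os.urandom(TOKEN_LEN)
    control.sendall(token)

    # Accept the data connection; pair it with the control connection
    # only if the client echoes the token back.
    data, _ = listener.accept()
    if recv_exact(data, TOKEN_LEN) != token:
        data.close()
        raise ValueError("data connection failed the token check")
    return control, data

def client_connect_pair(host, port):
    control = socket.create_connection((host, port))
    token = recv_exact(control, TOKEN_LEN)
    data = socket.create_connection((host, port))
    data.sendall(token)  # identify this socket by the token, not by IP
    return control, data
```

Opening both connections up front, as suggested above, also hides the 1-2 second connect delay behind app startup.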

Related

What is the best practice for IMAP connections?

I think the concept of a connection is not fully clear to me.
I am building a small read-only webmail client for a project, and I am using Ruby's net/imap library.
Should I open a connection, authenticate, do the action, and disconnect each time?
Or should I open one connection and pass it around in my application?
Can someone clarify the concept of an IMAP connection for me?
I have seen that a lot of clients open multiple connections at the same time. Why?
An IMAP connection is expensive enough that you'll want to keep it if you're going to use it again in the next seconds (perhaps even minutes). It carries much more state and is much more expensive to set up than the HTTP connections you're probably familiar with.
However, IMAP connections die randomly. Many NAT middleboxes are surprised when a TCP connection remains quiet for three minutes, as IMAP connections often do. So you'll probably want to accept that connections can die, and reopen them when necessary.
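The question uses Ruby's net/imap, but here is a sketch of that keep-it-open, reopen-on-death pattern using Python's imaplib, which I can vouch for; the same shape carries over. The host, credentials, and single-retry policy are placeholders:

```python
import imaplib

class MailSession:
    def __init__(self, host, user, password):
        self.host, self.user, self.password = host, user, password
        self.conn = None  # opened lazily, kept across calls

    def _connect(self):
        self.conn = imaplib.IMAP4_SSL(self.host)
        self.conn.login(self.user, self.password)
        self.conn.select("INBOX", readonly=True)  # read-only webmail

    def search_all(self):
        # Reuse the open connection; if a middlebox or the server has
        # silently killed it, imaplib raises IMAP4.abort (or the OS
        # raises OSError) and we retry once on a fresh connection.
        for _ in range(2):
            try:
                if self.conn is None:
                    self._connect()
                _, data = self.conn.search(None, "ALL")
                return data[0].split()
            except (imaplib.IMAP4.abort, OSError):
                self.conn = None
        raise ConnectionError("IMAP server unreachable")
```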

So many persistent connections to the server. Is that the right way?

I would like to understand networking services with a large user base a bit better, so that I know how to approach a project I am busy with.
The following statements that I make may be incorrect, but they still lead to the question that I want to ask...
Please consider the Skype and TeamViewer clients. It seems that both keep persistent network connections open to their respective servers. They use these persistent connections to initiate additional connections. Some of these connections are created by means of hole punching if the clients are behind NATs, and are then used for direct peer-to-peer communication.
Now, according to http://expandedramblings.com/index.php/skype-statistics/ there are 300 million Skype users, 4.9 million of them daily active. I would assume that most of those 4.9 million users probably have their client apps running most of the day. That is a lot of connections to the Skype servers open at any given time.
So, to my question: is this feasible, or at least acceptable? I mean, wouldn't it be better not to keep a network connection open while idle, especially when there are so many connections open to the servers at once? The only reason I can think of is that it would be the only way to properly do hole punching. Technically, how is this achieved on the server side?
Is this feasible or at least acceptable?
Feasible it certainly is: you already mention two popular apps that do it, so it is very doable in practice.
As for acceptable: to start with, no Internet authority (e.g. the IETF) has ever said it is unacceptable to have long-lived connections, even with low traffic.
Furthermore, the only components for which this matters are network elements that keep connection/flow state: the endpoints themselves and so-called middleboxes like NATs and firewalls. For the client this is only one connection; the server is usually fine-tuned by the application developers (who made this choice) themselves, so for these it is acceptable. For middleboxes it's simple: they have no choice, as they are designed to work with all kinds of flows, including long-lived persistent connections.
I mean, wouldn't it be better not to keep a network connection open while idle, especially when there are so many connections open to the servers at once?
Not at all. First of all, it could be much slower, as you'd need to set up a full connection before each control-plane call. This is especially noticeable if your RTT is big or if the servers do complicated connection proxying/redirection for load-balancing or localization purposes.
Next to that, this would historically have made incoming calls difficult for a huge number of users. Many ISPs block (or used to block) unknown incoming connections from the Internet by means of a firewall. Similarly, if you are behind a NAT device that does not support UPnP or PCP, you can't open a port to listen on at your public IP address. So you need the persistent connection even aside from hole punching.
The only reason I can think of is that it would be the only way to properly do hole punching. Technically, how is this achieved on the server side?
Technically you can't do proper hole punching as soon as the NAT device maintains a full <src-ip,src-port,dest-ip,dest-port,protocol> (classic 5-tuple) flow match. In that case the best you can do with 'hole punching' is to set up a proxy between the peers.
What hole punching relies on is the NAT flow lookup only examining <src-ip,src-port,protocol> upstream and <dest-ip,dest-port,protocol> downstream to do the translation. In that case both clients just set up a connection to the server, their IP and port get translated, and the server passes this to the other client. The other client can now start sending packets to that translated <ip,port> combination, which should work because the NAT ignores the server's IP/port. But even if the particular NAT does work like this, some security device (e.g. a stateful firewall) might detect it as session hijacking and drop the packets anyway.
Nowadays you would rather use UPnP to open up a port to listen on at your public IP, which is much easier where it is supported.
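As a rough illustration of the <ip,port> exchange just described, here is a toy UDP rendezvous server in Python. The port, the payload sizes, and the naive assumption that the first two datagrams come from exactly the two peers to be paired are inventions for the sketch:

```python
import socket

def rendezvous(port=9999):
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("", port))

    # Each client sends one datagram; recvfrom() reports the source
    # address *after* NAT translation, which is what the peer needs.
    _, addr_a = srv.recvfrom(64)
    _, addr_b = srv.recvfrom(64)

    # Hand each client the other's translated endpoint. From here the
    # clients send to each other directly, which only works when the
    # NAT ignores the remote address as described above.
    srv.sendto(("%s:%d" % addr_b).encode(), addr_a)
    srv.sendto(("%s:%d" % addr_a).encode(), addr_b)
```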

TCP/IP long-term connections

I have a server application which runs on a Linux machine. I can connect to this application from Windows/Linux machines and send/receive data. After a few hours, something occurs and I get the following error on the client side.
On Windows: An existing connection was forcibly closed by the remote host
On Linux: Connection timed out
I searched the web and found some posts suggesting increasing/decreasing the OS's keep-alive time. However, it didn't work for me.
Is there a solution to this problem, or should I simply try to reconnect to the server when the connection is forcibly closed?
EDIT: I have tracked down the situation. I sent data to the remote node, then sent more data after waiting 5 hours. The sending side delivered the first message, but when it sent the second one it got no response. The sender's TCP/IP stack retried 5 times, increasing the interval between retries, and finally reset the connection. I can't be sure why this is happening (maybe because of a firewall or NAT - see Section 2.4), but I applied two different approaches to solve the problem:
Use TCP keep-alive via setsockopt (Section 4.2); a sketch follows below.
Implement an application-level keep-alive. This is more reliable, since the first approach is OS-dependent.
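A minimal sketch of the first approach, in Python for brevity (the option names map one-to-one onto the C setsockopt() calls). Note that TCP_KEEPIDLE, TCP_KEEPINTVL, and TCP_KEEPCNT are Linux-specific, and the values below are examples to tune, not recommendations:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Probe after 60s of silence, then every 10s, give up after 5 misses.
# Pick an idle time comfortably below whatever timeout the firewall
# or NAT in the path applies to quiet connections.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)
```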
It depends on what your application is supposed to do. A little more information, and perhaps the code you use for listening and handling connections, could be of help.
Regardless, a longer keep-alive time should technically prevent the OS from cutting you off, so perhaps something else is causing the trouble,
such as a router malfunction or heavy traffic causing your keep-alive packets to get lost.
If you aren't already testing it on a LAN (without heavy traffic), I suggest doing so.
It might also be due to how your socket is handled (which I can't determine from your question).
This article might help:
Non blocking socket with timeout
I'm not used to how connections are handled on Linux, but I expect the OS won't cut off a connection unnecessarily.
You can re-establish the connection as a recovery, but you need to take into account that not all disconnects are gentle, so you could end up re-establishing a connection you actually wanted closed.
Since it is TCP, it will do its best to make a gentle disconnect, but you can send a custom message telling the server or client not to re-establish the connection right before disconnecting. That way you can be absolutely sure, even though it should be unnecessary.
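A sketch of that last idea in Python, assuming a line-framed protocol; the BYE marker and the callback names are invented for the example:

```python
import socket

BYE = b"BYE\n"  # deliberate-close marker; any reserved message works

def close_for_good(sock):
    sock.sendall(BYE)  # tell the peer: do not reconnect
    sock.close()

def reader_loop(sock, on_line, reconnect):
    buf = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:        # dropped without a BYE: try to recover
            reconnect()
            return
        buf += chunk
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            if line + b"\n" == BYE:
                return       # gentle, intentional disconnect: stay down
            on_line(line)    # ordinary application message
```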

Should I be afraid to use UDP to make a client/server broadcast talk?

I spent the last two days reading StackOverflow questions and answers (and googling, of course) about the Indy TCP and UDP components in order to decide which one to use for the communication between my user application and my Windows service.
From what I've seen so far, UDP is the easiest, and the only one I managed to get working for receiving broadcast messages with TIdUDPClient (I have not tested the response back yet). I also noticed that TCP is a bit more complicated, with its thread loop.
But since everywhere I'm told UDP is not reliable, UDP is not reliable... I'm beginning to wonder whether it wouldn't be better to use TCP anyway.
My user application will be running on many machines, and the service will run on one of them, either sharing an IP with a client or on a dedicated machine, depending on my client's funds. So, should I really be worried about the possibility of UDP data loss?
I need broadcast capabilities so my server can advise all clients at once about application updates, and of course, if the client application does not know which IP the service/server is at, it will send a broadcast call to be told where the server is. Is that applicable to TCP?
The messages I am sending are requests for user access confirmation, user privileges, and application executable file updates, since the main application can't update itself.
Those messages are encrypted like the example below, and they might get bigger sometimes.
e86c6234bf117b97d6d4a0c5c317bbc75a3282dfd34b95446fc6e26d46239327f2f1db352b2f796e95dccd9f99403adf5eda7ba8
I decided to use them both!
Simple use case:
In order to communicate with the TCP protocol you have to establish a connection, which you can only have if you know the IP and port on both ends.
If you do not have that information when you load your application, then you use UDP to broadcast your IP address and your intention to find the/a server. You may try about 5 times before raising an error to the user saying that you did not find the server or that the server is down.
That UDP message will (at one time or another) reach the UDP ear of the server, which will then know the lonely client's IP and can begin a proper TCP connection to talk about the critical messages of the application.
What do you think of that approach?
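For concreteness, here is roughly that flow sketched in Python rather than Delphi/Indy; the port numbers, the magic payload, and the retry/timeout values are all inventions for the example:

```python
import socket

DISCOVER_PORT = 50000   # assumed UDP discovery port
TCP_PORT = 50001        # assumed TCP port for the real traffic
MAGIC = b"WHERE-IS-SERVER"

def find_server(timeout=2.0, attempts=5):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.settimeout(timeout)
    for _ in range(attempts):               # "try about 5 times"
        s.sendto(MAGIC, ("255.255.255.255", DISCOVER_PORT))
        try:
            _, (server_ip, _) = s.recvfrom(64)
        except socket.timeout:
            continue                        # lost datagram: just retry
        return socket.create_connection((server_ip, TCP_PORT))
    raise TimeoutError("server not found or down")

def answer_discovery():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", DISCOVER_PORT))
    while True:
        data, addr = s.recvfrom(64)
        if data == MAGIC:
            s.sendto(b"HERE", addr)  # the reply's source address reveals us
```

The retry loop is also where UDP's unreliability stops mattering: a lost discovery datagram just costs one retry, while everything critical travels over the TCP connection.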

Erlang: Sending on a closed connection

If a client connects to a server over a normal TCP connection, and the client's connection later cuts out, the server will get (assuming active mode) {tcp_closed,Socket}. But there are cases where the server won't know that the client has disconnected, such as a power failure or a crash (I believe; I could be wrong). In these cases the client is gone but the server still believes it's connected. If the server attempts to send the client a message in these cases, will it assume that the client got the message, or will the TCP stack sort that out at a low level and hand the server back some kind of error?
I know this is a simplistic question, but I've been having trouble testing it myself, as I can't get a client to fail catastrophically the way I need it to (even kill -9 isn't doing it). Does anyone have any experience with this?
It depends. When you try to send data, the kernel's TCP window will slowly fill until it can't take any more; then your send will block because the internal kernel buffer is full. TCP has timers which trigger after some time, and when that happens the kernel will fail the send request, which the Erlang VM runtime transforms into {error, Reason}, where Reason is the posix() error from the underlying system.
If you want to be sure the data got through, you have to acknowledge it on the stream in the other direction. Or you can make the data idempotent so it can be resent without trouble. This is especially important if the other endpoint, the client, is a device like a mobile phone, where disconnects happen all the time.
To test it, you can block the communication with a firewall rule on lo.
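To illustrate the acknowledgement idea, here is a language-neutral sketch in Python (in Erlang you would pattern-match on the ACK message instead). The 3-byte ACK token, the timeout, and the simplifying single recv() are assumptions for the example:

```python
import socket

def send_with_ack(sock, payload, timeout=5.0):
    # sendall() succeeding only means the bytes reached the local
    # kernel buffer; an explicit ACK from the peer is what actually
    # confirms delivery.
    sock.sendall(payload)
    sock.settimeout(timeout)
    try:
        reply = sock.recv(3)   # simplified: assumes the ACK arrives whole
    except socket.timeout:
        raise ConnectionError("no ACK: peer is gone or the path is dead")
    if reply != b"ACK":
        raise ConnectionError("unexpected reply from peer")
```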
