easy to write a script to test whether the network is ever down for the next 24 hours? - network-programming

Is it easy to write a script to test whether the network is ever down over the next 24 or 48 hours? I can use ssh to connect to a shell and come back 48 hours later to check whether the session is still connected, which would tell me whether the network has been down, but can I do this programmatically in an easy way?

The Internet (and your ethernet) is a packet-switched network, which makes the definition of 'down' difficult.
Some things are obvious; for example, if your ethernet card doesn't report a link, then it's down (unless you have redundant connections). But having a link doesn't mean it's up.
The basic, 100-mile view of how the Internet works is that your computer splits the data it wants to send into ~1500-byte chunks called packets. It then, pretty much blindly, sends them on their way to whatever next hop your routing table specifies. That machine repeats the process, and eventually, after many such hops, the packet reaches the remote host.
So, you may be tempted to define up as the packet reached its destination. But, what happens if the packet gets corrupted, e.g., due to faulty hardware or interference? The next router will just discard it. Ok, that's fine, you may well want to consider that down. What if a router on the path is too busy, or the link it needs to be sent on is full? The packet will be dropped. You probably don't want to count that as down.
Packets routinely get lost; higher-level protocols (e.g., TCP) deal with this and retransmit them. In fact, TCP relies on packets being dropped when a link is full to discover the available bandwidth.
You can monitor packet loss with a tool like ping, as the other answer states.
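For instance, a minimal sketch of such a script in Python (a sketch only: it assumes a Unix-style ping whose -c/-W flags and "packet loss" summary line may differ on other platforms, and the target address is a placeholder):

    import re
    import subprocess
    import time

    HOST = "8.8.8.8"  # placeholder target; your ISP's router or DNS server may be more meaningful

    def sample_loss(host, count=10):
        """Send `count` pings and return the reported packet-loss percentage."""
        out = subprocess.run(
            ["ping", "-c", str(count), "-W", "2", host],
            capture_output=True, text=True,
        ).stdout
        m = re.search(r"(\d+(?:\.\d+)?)% packet loss", out)
        return float(m.group(1)) if m else 100.0  # no summary line at all: treat as total loss

    while True:
        loss = sample_loss(HOST)
        print(f"{time.strftime('%Y-%m-%d %H:%M:%S')}  packet loss: {loss}%")
        time.sleep(60)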

If your need is more administrative in nature, and using existing software is an option, you could try something like monit:
http://mmonit.com/monit/
Wikipedia has a list of similar software:
http://en.wikipedia.org/wiki/Comparison_of_network_monitoring_systems
You should also consider whether very short outages need to be detected. Obviously, a periodic reachability test cannot guarantee detecting outages shorter than the testing interval.
If you only care about whether there was an outage, not how many there were or how long they lasted, you should be able to automate your existing ssh technique using expect pretty easily.
http://expect.nist.gov/
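If you'd rather stay in Python than learn expect itself, a rough equivalent using the pexpect module is sketched below (a sketch only: it assumes a Unix-like system, key-based ssh authentication with an already-trusted host key, and the hostname is a placeholder):

    import pexpect

    # Keep one ssh session open; ServerAliveInterval makes ssh itself notice a dead link.
    child = pexpect.spawn(
        "ssh -o ServerAliveInterval=30 -o ServerAliveCountMax=3 user@example.com",
        timeout=None,
    )

    try:
        # Wait up to 48 hours for the session to die (EOF). A timeout means it stayed up.
        child.expect(pexpect.EOF, timeout=48 * 3600)
        print("Connection dropped: the network was down at some point.")
    except pexpect.TIMEOUT:
        print("Session survived the whole window: no outage long enough to kill ssh.")

This only tells you whether an outage happened, not when or for how long, which matches the original question.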

Most platforms support a ping command that can be used to find out if a network path exists to an IP address somewhere "else". Where, exactly, to check depends on what you are really trying to answer.
If the goal is to discover the reliability of your first hop to your ISP's network, then regularly pinging one of their routers or DNS servers might be sufficient.
If your concern is really the connection to a single node (your mention of leaving an ssh session open implies this), then just pinging that node is probably the best idea. The ping command will usually fail with a nonzero exit status that a script can test when the connection times out.
Regardless, it is probably a good idea to check at a rate no faster than once a minute, and slower than that is probably sufficient.
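A sketch of that approach (with the same caveat that the ping flags shown are the Linux ones, and the address is a placeholder), probing once a minute and logging only the up/down transitions:

    import subprocess
    import time

    HOST = "192.0.2.1"  # placeholder: the node you would otherwise keep an ssh session to

    def reachable(host):
        # ping exits with status 0 only if at least one reply came back
        return subprocess.call(
            ["ping", "-c", "1", "-W", "2", host],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        ) == 0

    was_up = True
    while True:
        up = reachable(HOST)
        if up != was_up:
            print(f"{time.strftime('%Y-%m-%d %H:%M:%S')}  {'UP' if up else 'DOWN'}")
            was_up = up
        time.sleep(60)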

Related

Should I use a single TIdIcmpClient to ping different addresses, or should I use multiple TIdIcmpClients?

I am using Delphi 11 FMX to write for Linux and Windows.
I have an auto-ping program working well on both platforms, using TIdIcmpClient.
But now, I want to write a program that will ping up to four different IP addresses, displaying a flag when an address goes down. I could, of course, ping one IP and then change the IP and ping another. But I have noticed that the first ping is always longer, even though I usually ping at intervals of a few seconds, for example once every 5 seconds; the first ping is always much higher than the rest. So, in my program, I ignore the first ping.
I am afraid that if I use one TIdIcmpClient, changing IPs before every ping, every ping will behave like a first ping and so will not represent the real round-trip time. Or does this happen just the first time the TIdIcmpClient pings?
Alternatively, perhaps I should use multiple TIdIcmpClient components? But I don't know if it is okay to use multiple TIdIcmpClients. This program would ping each address perhaps once every 10 seconds at most, more likely once a minute.
I am looking for advice that will save me time when I write this program. I searched for this on StackOverflow, and elsewhere. I found some useful things here, but nothing this specific.

iOS - synching audio between devices with millisecond accuracy

I need to sync audio on networked devices with millisecond accuracy. I've hacked together something that works quite well, but isn't perfectly reliable:
1) Server device sends an RPC with a timeSinceClick param
2) Client device launches the same click, offset according to the time the RPC spent in transit,
3) System.Diagnostics.StopWatch checks periodically on all connected devices to make sure playback hasn't deviated too much from absolute time, and corrects if necessary
Are there any more elegant ways to do this? Also, my way of doing it requires manual syncing if non-iOS devices are added to the mix: latency divergences make it very hard to automate...
I'm all eyes!
Cheers,
Gregzo
It is difficult to synchronize multiple devices on the same machine with millisecond accuracy, so if you are able to do this on multiple machines I would say you are doing well. I'm not familiar enough with iOS to comment on the steps you describe, but I can tell you how I would approach this in a cross-platform way. Maybe your approach amounts to the same thing.
1) One machine (the "master") would send a UDP packet to all other machines.
2) All other machines would reply as quickly as possible.
3) The time it takes to receive the reply, divided by two, would be (approximately) the time it takes to get a packet from one machine to another. (This would have to be validated. Maybe it takes much longer to process and send a packet? Probably not.)
4) After repeating steps 1-3, ignoring any extreme values and averaging the remaining results, you know about how long it takes to get a message from one machine to another.
5) Now "sync" UDP packets can be sent from the master machine to the "slave" machines. The sync packets will include delay information so that when the slave machines receive the packets, they know they were sent x milliseconds ago. Several sync packets may need to be sent in case the network delays or drops some of them.

Real time audio conversation iOS

I am designing an iOS app for a customer who wants to allow real-time (with minimum lag, 50ms max) conversations between users (a sort of TeamSpeak). The lag must be low because the audio can also be live music, played with instruments, so all the users need to stay in sync. I need a server that will collect audio from every client and send it to the others (and make them all hear the same sound at the same time).
HTTP is easy to manage/implement and easy to scale, but it performs poorly, because an average HTTP request takes > 50ms (with mid-level hardware), so I was thinking of TCP/UDP connections kept open between clients and server.
But I have some questions:
If I develop the server in Python (using TwistedMatrix, for example), how will it perform?
I can't develop the server in C++ because it is hard to develop and hard to make scalable.
Has anyone used Node.js (which is easy to scale) to manage TCP/UDP connections?
If I use HTTP, will it be fast enough with Keep-Alive? Because the time usually required for an HTTP request is > 50ms (since opening and closing connections is expensive), and I want the whole procedure to take less than that.
The server will be running on a Linux machine.
And finally: which type of compression can you suggest? I thought Ogg Vorbis would be nice, but if there's anything better (that can be used on iOS), I am open to changes.
Thank you,
Umar.
First off, you are not going to get sub-50ms latency. Others have tried this. See, for example, http://ejamming.com/, a service that attempts to do what you are doing, but has a musically noticeable delay over the line and is therefore, in the ears of many, completely unusable. They use special routing techniques to get the latency as low as possible, and last I heard their service doesn't work with some router configurations.
Secondly, what language you use on the server probably doesn't make much difference, as the delay from client to server will be worse than any delay caused by your service, but if I understand your service correctly, you are going to need a lot of servers (or server threads) just relaying audio data between clients or doing some sort of minimal mixing. This is a small amount of work per connection, but a lot of connections, so you need something that can handle that. I would lean towards something like Java, Scala, or maybe Go. I could be wrong, but I don't think this is a good use-case for node, which, as I understand it, does not do multithreading well at this time. Also, don't poo-poo C++; scalable services have been built in C++. You could also build the relay part of the service in C++ and the rest in whatever.
Third, when choosing a compression format, you'll have to choose one that can survive packet loss if you plan to use UDP, and I think UDP is the only way to go for this. I don't think vorbis is up to this task, but I could be wrong. Off the top of my head, I'm not sure of anything that works on the iPhone and is UDP friendly, but I'm sure there are lots of things. Speex is an example and is open-source. Not sure if the latency and quality meet your needs.
Finally, to be blunt, I think there are some other things you should research a bit more. E.g., DNS is usually cached locally and not looked up on every HTTP call (though it may depend on the system/library; at least most systems cache DNS locally). Also, there is no such protocol as TCP/UDP. There is TCP/IP (sometimes just called TCP) and UDP/IP (sometimes just called UDP). You seem to refer to the two as if they were one. The difference is very important for what you are doing. For example, HTTP runs on top of TCP, not UDP, and UDP is considered "unreliable", but has less overhead, so it's good for streaming.
Edit: speex
As far as the server is concerned, the request itself is not a bottleneck. I guess you have sufficient time to set up the connection, as it happens only at the beginning of the session. Therefore the protocol is not of much relevance.
But consider that HTTP is a stateless protocol and not suitable for audio streaming. There are a couple of real time streaming protocols you can choose from. All of them will work over TCP or UDP (e.g. use raw sockets), and there are plenty of implementations.
In your case, the latency bottleneck is not the server but the network itself. The connection between an iOS device and a wireless access point (AP) eats up about 40ms if the AP is not misconfigured and the connection is good. (Ping your iPhone.) In total, you'd have a minimum of 80ms for the path iOS -> AP -> Server -> AP -> iOS. But it is difficult to keep that latency stable. (Typical latency of AirPlay on my local network is about 300ms.)
I think live music over iOS devices is not practicable today. Try Skype between two iOS devices and see how close you can get to 50ms. I'd bet no one can do significantly better, as far as latency is concerned.
Update: New research result!
I have to revise my claims regarding the latency of the iDevice's wifi connection. Apparently, when you first ping your device, latency will be bad. But if I ping again no later than 200ms after that, I see an average latency of 2-3ms between AP and iDevice.
My interpretation is that if there is no communication between AP and iDevice for more than 200ms, the network adapter of the iDevice will go to a less responsive sleep mode, probably to save battery power.
So it seems, live music is within reach again... :-)
Update 2
The ping interval required to keep latency low apparently differs from device to device. The reported 200ms is for a 3rd-gen iPad. For my iPhone 4 it's more like 50ms.
While streaming audio you probably don't need to bother with this, as data is exchanged on a more frequent basis. In my own context, I have sparse communication between an iDevice and a server, but low latency is crucial. A keep-alive is therefore the way to go.
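A minimal sketch of such a keep-alive in Python (the server address and the 50ms interval are placeholders; per the observations above, the safe interval is device-dependent, somewhere between 50ms and 200ms):

    import socket
    import threading
    import time

    SERVER = ("192.168.1.10", 9998)   # placeholder server address
    KEEPALIVE_INTERVAL = 0.05         # 50ms; tune per device

    def keepalive_loop(sock):
        # Tiny UDP packets whose only job is to keep the device's wifi radio
        # out of its power-save mode between real messages.
        while True:
            sock.sendto(b"ka", SERVER)
            time.sleep(KEEPALIVE_INTERVAL)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    threading.Thread(target=keepalive_loop, args=(sock,), daemon=True).start()
    # ... the app's sparse, latency-sensitive traffic can then use the same socket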
Best, Peter

How do I increase the priority of a TCP packet in Delphi?

I have a server application that receives a special TCP packet from a client and needs to react to it as soon as possible by sending a high-level ACK to the client (the TCP ACK won't suit my needs).
However, this server is really network intensive and sometimes the packet will take too long to be sent (like 200ms in a local network, when a simple server application can send it in less than 1ms).
Is there a way to mark this packet with a high-priority tag or something like that in Delphi? Or maybe with the Win32 API?
Thanks in advance.
EDIT
Thanks for all the answers so far. I'll add some details. My product has the following setup: there are several devices mounted on vehicles with WiFi connectivity. When they arrive at the garage, those devices connect to my server and start to transmit data.
Because of hardware limitations, I implemented a high-level ACK to make the device aware that the last packet arrived successfully (please don't argue about this - the data may be broken even if I got a correct TCP ACK). However, if I use my server software, which communicates with a remote database, to issue this ACK, I get very long delays (>200ms). If I use a dedicated program to do this task, I get small latencies (<1ms). So, I was wondering whether I could just tell Windows to send those special packets first, as it seems to me that this packet is getting delayed so that the database-related ones can be delivered.
That's the motivation behind my question.
EDIT 2
As requested: this is legacy software and I'm using the legacy dclsockets140.bpl package and Delphi 2010 (14.0.3593.25826).
IMO it is very difficult to achieve this; there is a lot of equipment and software involved. First of all, if you communicate between two different OSes you get latency. Second, software and hardware firewalls, antiviruses: everything is filtering/delaying your packets.
You can also try to 'hack' the system (this requires very good knowledge of how frames/segments are packed and sent, flow control, congestion, etc.), either by altering it from code or by using tools like http://half-open.com/ or others.
In short, passing the MSG_OOB flag to the send function marks the data as "urgent". A detailed discussion of OOB data in the context of the Windows Sockets implementation specifics is available here.
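For illustration, here is how the flag looks with plain sockets (shown in Python only to keep it short; the same MSG_OOB flag exists in Winsock and can be passed to send from Delphi). Keep in mind that TCP urgent data is effectively limited to a single out-of-band byte, so it suits a tiny "attention" signal rather than a full ACK payload:

    import socket

    # Placeholder address of the device; assume the TCP connection is already established.
    sock = socket.create_connection(("192.0.2.10", 5000))

    # Send one urgent byte out of band; the peer must read it with MSG_OOB
    # (or enable SO_OOBINLINE) to see it ahead of the normal stream.
    sock.send(b"!", socket.MSG_OOB)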

Managing security on UDP socket

I am looking at developing my first multiplayer RTS game and I'm naturally going to be using UDP sockets for receiving/sending data.
One thing that I've been trying to figure out is how to protect these ports from being flooded by fake packets in a DoS attack. Normally a firewall would protect against flood attacks but I will need to allow packets on the ports that I'm using and will have to rely on my own software to reject bogus packets. What will stop people from sniffing my packets, observing any authentication or special structure I'm using and spamming me with similar packets? Source addresses can easily be changed to make detecting and banning offenders nearly impossible. Are there any widely accepted methods for protecting against these kind of attacks?
I know all about the differences between UDP and TCP and so please don't turn this into a lecture about that.
===================== EDIT =========================
I should add that I'm also trying to work out how to protect against someone 'hacking' the game and cheating by sending packets that I believe are coming from my game. Sequencing/sync numbers or IDs could easily be faked. I could use encryption, but I am worried about how much this would slow my server's responses, and it wouldn't provide protection from DoS.
I know these are basic problems every programmer using a UDP socket must encounter, but for the life of me I cannot find any relevant documentation on methods for working around them!
Any direction would be appreciated!
The techniques you need would not be specific to UDP: you are looking for general message authentication to handle spoofing, rate throttling to handle DoS, and server-side state heuristics ("does this packet make sense?") to handle client hacks.
For handling DoS efficiently, you need layers of detection. First, drop invalid source addresses without even looking at the contents. Put a session ID at the start of each packet, and drop any packet whose ID isn't assigned or doesn't match the right source. Next, keep track of the arrival rates per session, and start dropping packets from addresses that are sending too fast. These techniques will block everything except someone who is able to sniff legitimate packets in real time.
But a DoS attack based on real-time sniffing would be very rare and the rate of attack would be limited to the speed of a single source network. The only way to block packet sniffing is to use encryption and checksums, which is going to be a lot of work. Since this is your "first multiplayer RTS", I suggest doing everything short of encryption.
If you do decide to use encryption, AES-128 is relatively fast and very secure. Brian Gladman's reference Rijndael implementation is a good starting point if you really want to optimize, or there are plenty of AES libraries out there. Checksumming the clear-text data can be done with a simple CRC-16. But that's probably overkill for your likely attack vectors.
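A sketch of that layered filtering (the port, rate limit, 4-byte session-ID header, and the handle_game_packet handler are all made up for illustration):

    import socket
    import struct
    import time
    from collections import defaultdict

    PORT = 27015                    # placeholder game port
    MAX_PACKETS_PER_SEC = 50        # made-up per-session budget

    # session_id -> expected source address, assigned at login over a separate channel
    sessions = {0x1234: ("203.0.113.7", 53211)}
    arrival = defaultdict(list)     # session_id -> recent arrival timestamps

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))

    while True:
        data, addr = sock.recvfrom(1500)
        if len(data) < 4:
            continue                                # too short to even carry a session ID
        (sid,) = struct.unpack_from("!I", data)     # cheap header check before anything else
        if sessions.get(sid) != addr:
            continue                                # unknown session or spoofed source: drop
        now = time.monotonic()                      # then per-session rate tracking
        recent = [t for t in arrival[sid] if now - t < 1.0]
        recent.append(now)
        arrival[sid] = recent
        if len(recent) > MAX_PACKETS_PER_SEC:
            continue                                # flooding: drop without further processing
        handle_game_packet(sid, data[4:])           # hypothetical game-logic handler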
Most important of all: never trust the client! Always keep track of everything server-side. If a packet arrives that seems bogus (like a unit moving Y units per second when it should only be able to move X units per second), simply drop the packet.
Also, if the number of packets per second grows too big, start dropping packets as well.
And don't use the UDP packets for "unimportant" things... In-game chat and similar things can go through normal TCP streams.
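A small sketch of that server-side plausibility check (the unit record and speed limit are hypothetical):

    import time

    MAX_SPEED = 5.0   # made-up maximum units per second for this unit type

    def plausible_move(unit, new_x, new_y, now=None):
        """Reject a reported move that implies impossible speed; never trust the client."""
        now = now or time.monotonic()
        dt = max(now - unit["last_update"], 1e-3)
        dist = ((new_x - unit["x"]) ** 2 + (new_y - unit["y"]) ** 2) ** 0.5
        if dist / dt > MAX_SPEED:
            return False              # drop the packet and keep the server-side position
        unit.update(x=new_x, y=new_y, last_update=now)
        return True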
