While looking at messages in Wireshark, I have noticed that checksum validation is always disabled. Is it an obsolete requirement, or does it apply only to internet traffic that leaves a company network's firewall?
Also, can someone please advise how it is set (e.g., whether from an application, a network card setting, etc.)?
PS: The question might not be of practical significance; I am asking this to fill the large gaps in my poor network programming skills. I had heard that checksum validation is a major bottleneck for TCP communication, but am surprised that it is disabled for all messages I have seen.
This question is answered in the Wireshark FAQ.
The upshot is that checksums are generally calculated by network cards, and Wireshark often captures packets before they hit the hardware that does the actual calculation. Enabling validation for those packets results in a large number of spurious errors, so validation is disabled by default. More info is available via the link.
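For reference, the validation in question just recomputes the 16-bit one's-complement Internet checksum (RFC 1071) over the packet (plus, for TCP, a pseudo-header). A minimal sketch of the core sum in Python, assuming the pseudo-header has already been prepended:

    def inet_checksum(data: bytes) -> int:
        """16-bit one's-complement checksum (RFC 1071) over the given bytes."""
        if len(data) % 2:                 # odd length: pad with a zero byte
            data += b"\x00"
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]      # big-endian 16-bit words
            total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
        return ~total & 0xFFFF

When the card is doing this offloaded, a capture of outgoing traffic sees whatever placeholder the stack left in the checksum field, which is why validating local traffic flags so many "errors".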
Edit: just to address fruit's comment below, I screenshotted a couple of TCP packets for comparison. The first one is a TCP packet without validation:
You can see that there is a non-zero checksum there, so it might appear that Wireshark (or some other pre-hardware app) has done the checksum for you. However, when we turn validation on for this same packet...
Now we can see that this checksum wasn't valid in the first place. I can't find a source for this, but I think it's strong evidence that Wireshark is not populating the field for us; doing so would go against Wireshark's nature anyway. Instead, I expect this is just an uninitialized field in the packet - it takes more work to zero out a field than to not set it at all.
It's also worth noting that as time goes on, more and more network stacks will be offloading checksumming to the hardware, so there will be fewer and fewer cases of valid checksums coming from the local machine.
Related
I have an iOS application that remotely connects to 3 sockets (on some hardware). Each socket has its own priority: one channel is used only for transferring messages between the iPad app and the hardware, one for Tx/Rx of images, and another for Tx/Rx of videos. I had implemented all three sockets using the GCDAsyncSocket API, and things worked fine while using MSGSocket/ImageSocket (or) MSGSocket/VideoSocket, but when I start using the VideoSocket/ImageSocket/MSGSocket simultaneously, things go a little haywire: I lose packets of data (actually, a chunk of a file goes missing). I went through the API and found a bug in it: Unable to complete Read Stream, which I assumed could be a cause of the problem. Hence, I switched to threads and implemented the same using the NSThread/CFSocket API.
I changed only the implementation of the ImageSocket/VideoSocket code to use the NSThread/CFSocket API; here is the implementation of the same, dropbox-ed. I'm just unable to understand where things are going wrong, whether at the iOS app end or on the server side. In my understanding, there should be no loss of packets in TCP communication.
Is there a way to debug this issue? Also, I request that you go through the code and let me know if anything is wrong (I know this may be too much to ask, but I need some assurance that the code implementation is correct). Any help in resolving this issue will be highly appreciated.
EDIT 1: After JoeMcMahon's comment, I referred to this Technical Q&A and got a TCP dump - a trace.pcap file. I opened this dump with Wireshark, and it does show me the bytes transferred between the ports of the hardware and the iPad.
Also, when I stopped the tcpdump capture, I saw these messages in the terminal:
12463 packets captured
36469 packets received by filter
0 packets dropped by kernel
Can someone point out the difference between packets captured and packets received by filter?
Note - The TCP dump attached is not for a failed scenario.
EDIT 1.1: Found the answer to the difference between packets captured and packets received by filter here.
TCP communication is not guaranteed to be reliable at the packet level. The basic SYN/ACK and acknowledgment scheme can break down; that is why there is a retransmission mechanism, etc. Wireshark reports such problems in your packet capture session.
When using Wireshark/tcpdump, you generally want to provide a capture filter, since the amount of traffic going through the wire (ping, NTP, etc.) is overwhelming; a basic filter lets you see only the packets relevant to you. Packets that are filtered out are received by the filter but not captured, hence the numerical difference.
If a chunk of a file went missing, I doubt the issue is at the TCP level; most likely something went wrong at a higher level. I would run a fixed-size file repeatedly through the channel until I could reliably reproduce the loss (a sketch of such a test follows).
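A minimal sketch of the sending side of such a test (the host, port, and file name are placeholders; compare the printed hash with one computed on the receiving end after each run):

    import hashlib
    import socket

    HOST, PORT = "192.168.0.10", 5000    # placeholder hardware address/port
    CHUNK = 4096

    def send_file(path: str) -> str:
        """Stream a fixed file over TCP and return its SHA-256 for comparison."""
        digest = hashlib.sha256()
        with socket.create_connection((HOST, PORT)) as sock, open(path, "rb") as f:
            while chunk := f.read(CHUNK):
                sock.sendall(chunk)       # sendall retries partial writes
                digest.update(chunk)
        return digest.hexdigest()

    print(send_file("test_file.bin"))

If the hashes ever differ, the corruption is happening above TCP (buffer handling, stream framing), not on the wire.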
I am working on a project in which I am implementing a MAC protocol. My first task is to implement priority-based scheduling of packets: certain types of packets are more important than others and should be allotted higher transmission priority than the rest.
I have been trying to establish this for quite some time now and have used various approaches to achieve it.
There are certain bits in the IP header (the ToS/DSCP field) allotted for setting the priority of the packets being transmitted, and I have used socket programming to set them. I also tried raw sockets, but they caused some problems and did not work the way I wanted.
So I turned back to normal SOCK_DGRAM and SOCK_STREAM sockets, but I am still facing some problems.
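For reference, the ToS-setting part of what I'm doing looks roughly like this (the address, port, and DSCP value here are placeholders):

    import socket

    # IP_TOS sets the Type of Service / DSCP byte in outgoing IP headers.
    # 0xB8 is DSCP EF ("expedited forwarding"); choose values per your scheme.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0xB8)
    sock.sendto(b"high-priority payload", ("192.168.1.20", 9999))

As I understand it, this only marks the packets; whether anything along the path honors the marking is up to the network.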
Can anyone help me regarding this?
I have a server application that receives a special TCP packet from a client and needs to react to it as soon as possible by sending a high-level ACK to the client (the TCP ACK won't suit my needs).
However, this server is really network-intensive, and sometimes the packet takes too long to be sent (around 200 ms on a local network, whereas a simple server application can send it in less than 1 ms).
Is there a way to mark this packet with a high-priority tag or something like that in Delphi? Or maybe with the Win32 API?
Thanks in advance.
EDIT
Thanks for all the answers so far. I'll add some details. My product has the following setup: there are several devices built into vehicles, with Wi-Fi connectivity. When they arrive at the garage, those devices connect to my server and start to transmit data.
Because of hardware limitations, I implemented a high-level ACK to make the device aware that the last packet arrived successfully (please don't argue about this - the data may be broken even if I got a correct TCP ACK). However, if I use my server software, which communicates with a remote database, to issue this ACK, I get a very long delay (>200 ms). If I use a dedicated program to do this task, I get small latencies (<1 ms). So I was wondering whether I could just tell Windows to send those special packets first, as it seems to me that this packet is getting delayed behind the database ones.
That's the motivation behind my question.
EDIT 2
As requested: this is legacy software and I'm using the legacy dclsockets140.bpl package and Delphi 2010 (14.0.3593.25826).
IMO it is very difficult to realize this; there is a lot of equipment and software involved. First of all, if you communicate between two different OSes, you get latency. Second, software and hardware firewalls, antiviruses - everything is filtering and delaying your packets.
You can also try to 'hack' the system (this involves very good knowledge of how frames/segments are packed and sent, flow control, congestion, etc.), either by altering it from code or by using tools like http://half-open.com/ or others.
In short, passing the MSG_OOB flag to the send function marks the data as "urgent". A detailed discussion of OOB in the context of Windows Sockets implementation specifics is available here.
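For illustration, a minimal sketch in Python, where the same flag is exposed (the host and port are placeholders):

    import socket

    sock = socket.create_connection(("server.example.com", 5000))  # placeholder
    sock.sendall(b"normal data")
    # TCP urgent mode carries one out-of-band byte at a time.
    sock.send(b"!", socket.MSG_OOB)

    # On the receiving socket, the urgent byte is read separately:
    #     urgent = conn.recv(1, socket.MSG_OOB)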
I am looking at developing my first multiplayer RTS game and I'm naturally going to be using UDP sockets for receiving/sending data.
One thing that I've been trying to figure out is how to protect these ports from being flooded by fake packets in a DoS attack. Normally a firewall would protect against flood attacks but I will need to allow packets on the ports that I'm using and will have to rely on my own software to reject bogus packets. What will stop people from sniffing my packets, observing any authentication or special structure I'm using and spamming me with similar packets? Source addresses can easily be changed to make detecting and banning offenders nearly impossible. Are there any widely accepted methods for protecting against these kind of attacks?
I know all about the differences between UDP and TCP and so please don't turn this into a lecture about that.
EDIT
I should add that I'm also trying to work out how to protect against someone 'hacking' the game and cheating by sending packets that appear to come from my game. Sequence/sync numbers or IDs could easily be faked. I could use encryption, but I'm worried about how much it would slow my server's responses, and it wouldn't provide protection from DoS.
I know these are basic problems every programmer using a UDP socket must encounter, but for the life of me I cannot find any relevant documentation on methods for working around them!
Any direction would be appreciated!
The techniques you need would not be specific to UDP: you are looking for general message authentication to handle spoofing, rate throttling to handle DoS, and server-side state heuristics ("does this packet make sense?") to handle client hacks.
For handling DoS efficiently, you need layers of detection. First, drop packets from invalid source addresses without even looking at the contents. Put a session ID at the start of each packet, and drop any packet whose ID isn't assigned or doesn't match the right source. Next, keep track of the arrival rate per session, and start dropping packets from addresses that are sending too fast. These techniques will block everything except someone who is able to sniff legitimate packets in real time.
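A minimal sketch of the session-ID and arrival-rate layers (the packet layout, table structures, and rate cap are all placeholders):

    import time

    sessions = {}       # session_id -> expected source (ip, port)
    arrival = {}        # session_id -> (window_start, packet_count)
    MAX_PER_SEC = 50    # placeholder per-session rate cap

    def accept_packet(packet: bytes, source: tuple) -> bool:
        """Cheap layered checks: session ID first, then arrival rate."""
        session_id = packet[:4]                   # ID sits at the packet start
        if sessions.get(session_id) != source:    # unassigned ID or wrong source
            return False
        now = time.monotonic()
        start, count = arrival.get(session_id, (now, 0))
        if now - start >= 1.0:                    # roll over to a new window
            start, count = now, 0
        count += 1
        arrival[session_id] = (start, count)
        return count <= MAX_PER_SEC               # drop senders coming in too fast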
But a DoS attack based on real-time sniffing would be very rare and the rate of attack would be limited to the speed of a single source network. The only way to block packet sniffing is to use encryption and checksums, which is going to be a lot of work. Since this is your "first multiplayer RTS", I suggest doing everything short of encryption.
If you do decide to use encryption, AES-128 is relatively fast and very secure. Brian Gladman's reference Rijndael implementation is a good starting point if you really want to optimize, or there are plenty of AES libraries out there. Checksumming the clear-text data can be done with a simple CRC-16. But that's probably overkill for your likely attack vectors.
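If you do go the checksum route, note that Python's standard library happens to expose a CRC-16 variant (CRC-CCITT) as binascii.crc_hqx; here's a sketch (the framing format is made up for illustration):

    import binascii

    def frame(payload: bytes) -> bytes:
        """Append a CRC-16 (CCITT) so the receiver can detect corruption."""
        crc = binascii.crc_hqx(payload, 0xFFFF)
        return payload + crc.to_bytes(2, "big")

    def unframe(packet: bytes):
        """Return the payload, or None if the trailing CRC doesn't match."""
        payload, crc = packet[:-2], int.from_bytes(packet[-2:], "big")
        return payload if binascii.crc_hqx(payload, 0xFFFF) == crc else None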
Most important of all: never trust the client! Always keep track of everything server-side. If a packet arrives that seems bogus (like a unit moving Y units per second when it should only be able to move X units per second), simply drop the packet.
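As a sketch of that kind of server-side plausibility check (the speed cap is a placeholder):

    MAX_SPEED = 5.0   # placeholder: game units a unit may move per second

    def plausible_move(old_pos, new_pos, dt: float) -> bool:
        """Drop moves that exceed the unit's maximum possible speed."""
        dx = new_pos[0] - old_pos[0]
        dy = new_pos[1] - old_pos[1]
        return (dx * dx + dy * dy) ** 0.5 <= MAX_SPEED * dt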
Also, if the number of packets per second grows too big, start dropping packets as well.
And don't use UDP packets for "unimportant" things: in-game chat and similar traffic can go through normal TCP streams.
Is it easy to write a script to test whether the network is ever down over the next 24 or 48 hours? I can use ssh to connect to a shell and come back 48 hours later to see whether it is still connected, and thus whether the network was ever down, but can I do this programmatically, easily?
The Internet (and your ethernet) is a packet-switched network, which makes the definition of 'down' difficult.
Some things are obvious; for example, if your ethernet card doesn't report a link, then it's down (unless you have redundant connections). But having a link doesn't mean it's up.
The basic, 100-mile view of how the Internet works is that your computer splits the data it wants to send into ~1500-byte segments called packets. It then, pretty much blindly, sends them on their way to whatever next hop your routing table specifies. That machine repeats the process, and eventually, through many repetitions, the packets reach the remote host.
So you may be tempted to define "up" as "the packet reached its destination". But what happens if the packet gets corrupted, e.g., due to faulty hardware or interference? The next router will just discard it. OK, that's fine; you may well want to consider that down. What if a router on the path is too busy, or the link it needs to be sent on is full? The packet will be dropped - and you probably don't want to count that as down.
Packets routinely get lost; higher-level protocols (e.g., TCP) deal with this and retransmit. In fact, TCP uses the drop-packets-when-the-link-is-full behavior to discover the available link bandwidth.
You can monitor packet loss with a tool like ping, as the other answer states.
If your need is more administrative in nature, and using existing software is an option, you could try something like monit:
http://mmonit.com/monit/
Wikipedia has a list of similar software:
http://en.wikipedia.org/wiki/Comparison_of_network_monitoring_systems
You should consider also whether very short outages need to be detected. Obviously, a periodic reachability test cannot guarantee detecting outages shorter than the testing interval.
If you only care about whether there was an outage, not how many there were or how long they lasted, you should be able to automate your existing ssh technique pretty easily using expect (a sketch follows the link below):
http://expect.nist.gov/
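If you'd rather drive it from Python, the third-party pexpect library wraps the same idea; a sketch, assuming key-based auth to a placeholder host:

    import time
    import pexpect  # third-party wrapper around the expect idea

    # Keep one long-lived ssh session open; a timeout or EOF below means
    # the connection (and hence the network) went down at some point.
    child = pexpect.spawn("ssh user@remote-host")
    child.expect(r"[$#] ")           # wait for a shell prompt
    while True:
        child.sendline("")           # hit Enter
        child.expect(r"[$#] ")       # prompt must return; raises TIMEOUT/EOF
        time.sleep(60)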
Most platforms support a ping command that can be used to find out if a network path exists to an IP address somewhere "else". Where, exactly, to check depends on what you are really trying to answer.
If the goal is to discover the reliability of the first hop to your ISP's network, then pinging one of their routers or DNS servers regularly might be sufficient.
If your concern is really the connection to a single node (your mention of leaving an ssh session open implies this), then just pinging that node is probably the best idea. The ping command will usually fail in a way a script can test when the connection times out.
Regardless, it is probably a good idea to check at a rate no faster than once a minute; even less often is probably sufficient.
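Putting that together, a minimal once-a-minute ping logger (the target host is a placeholder; -c/-W are the Linux ping flags):

    import subprocess
    import time
    from datetime import datetime

    HOST = "192.0.2.1"   # placeholder: your ISP's router or the node you care about

    while True:
        # ping exits non-zero if no reply arrives within the 5-second timeout
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "5", HOST],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        if result.returncode != 0:
            print(datetime.now().isoformat(), "unreachable")
        time.sleep(60)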