I have a server running on a Windows 7 desktop PC, and a client running on a Windows XP Lenovo laptop.
The following keeps happening in a loop:
The client is broadcasting UDP packets containing some ID info.
The server gets the broadcast and replies with another UDP packet with some data inside.
I can see in Wireshark that the server is sending the proper data, but the recvfrom function is returning some other data. After around 20-30 seconds the data is finally read correctly.
If I run both the server and client on the desktop it works fine. Any ideas?
Relevant piece of code:
do
{
    result = recvfrom(_socket, buff, buffLen, 0, (SOCKADDR*)&SenderAddr, &SenderAddrSize);
    if (result != SOCKET_ERROR)
    {
        // small processing
        // .....
        sendto(_socket, buff, 16, 0, (SOCKADDR*)&SenderAddr, sizeof(SenderAddr));
    }
} while (true);
Taking a bit of a guess here.
I can't imagine that your UDP packets wander around somewhere for 20 seconds. After all, RTT around the globe over the public Internet is usually 40 times less than that. So I think you just keep re-sending your data until you get the expected response.
If I am right with my assumption, then what you see is normal UDP packet loss. Is that laptop on a wireless link? Does the client app block on some input?
Run Wireshark on the laptop too. Do you see the same number and sequence of packets as on the sender (server) side? If so, then the client does not consume those packets fast enough. If you actually see the packets arrive at the client with that 20-second delay, then you really have to describe more of your setup to explain the magic :)
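If that is what's happening, the usual client-side pattern is to bound each wait with a timeout and re-send on silence instead of spinning on recvfrom. A minimal sketch against the code above; the request buffer, broadcast address, retry count, and 500 ms timeout are illustrative assumptions, not taken from the original program:

// Hypothetical retry loop: re-broadcast the request until a reply
// arrives, using select() to cap each wait at 500 ms.
fd_set readSet;
struct timeval timeout;
int tries;

for (tries = 0; tries < 10; tries++)
{
    sendto(_socket, request, requestLen, 0, (SOCKADDR*)&BroadcastAddr, sizeof(BroadcastAddr));

    FD_ZERO(&readSet);
    FD_SET(_socket, &readSet);
    timeout.tv_sec = 0;
    timeout.tv_usec = 500000;   // 500 ms per attempt

    if (select(0, &readSet, NULL, NULL, &timeout) > 0)   // first arg is ignored on Winsock
    {
        result = recvfrom(_socket, buff, buffLen, 0, (SOCKADDR*)&SenderAddr, &SenderAddrSize);
        if (result != SOCKET_ERROR)
            break;              // got the reply; stop re-sending
    }
    // timed out: fall through and re-send
}

With a loop like this, a lost datagram costs one timeout interval rather than an open-ended stall.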
Thanks for your answers.
The problem was that sendto was behaving differently on the laptop compared to the desktop.
On the laptop sendto was sending 3 UDP packets; on the desktop, only one.
My application was expecting only one packet, so the processing time multiplied by 3 gave the impression of a delay.
I don't understand why this happens, but this was the problem.
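For anyone hitting the same thing: since UDP may hand you duplicates, the receiver can simply ignore repeats. A sketch under the assumption that each datagram starts with a 4-byte ID (the field layout and names here are hypothetical, not from the original protocol):

#include <string.h>

// Hypothetical duplicate filter: returns 1 if this datagram carries the
// same ID as the previous one (assumes the first 4 bytes hold the ID).
static unsigned int lastSeq = 0xFFFFFFFFu;   // sentinel: assumes real IDs never use this value

int isDuplicate(const char *buff, int len)
{
    unsigned int seq;
    if (len < (int)sizeof(seq))
        return 0;                  // too short to carry an ID; let it through
    memcpy(&seq, buff, sizeof(seq));
    if (seq == lastSeq)
        return 1;                  // repeat of the packet we just handled
    lastSeq = seq;
    return 0;
}

Dropping repeats up front would have made the three copies harmless regardless of why the laptop's stack emits them.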
I wrote an MQTT client program, which runs on a computer (computer 1). The MQTT client program connects to an MQTT broker with QoS=1 and publishes data to the broker periodically. I subscribe to the broker (QoS=1) from another computer (computer 2), using the mosquitto_sub utility. I found the data published to the broker is delivered with a delay of about 3 seconds. That delay is too long. I checked the code and found the 3-second delay comes from read_packet(), which reads back the acknowledgement from the broker. Why is there such a long delay? How can I figure it out? The broker (MQTT server) is managed by my coworker. If the broker is the cause, I can ask them for help, but I need to know what the trouble source could be, so that I can check with them.
I can confirm the delay occurs at the time of reading back the acknowledgement from the broker by watching the debug messages from the MQTT client program on computer 1. For QoS = 1, the client must read back an acknowledgement after sending (publishing) packets. I found a 3-second delay between sending a packet and reading back the acknowledgement. I also see the delay in the output of the mosquitto_sub utility.
Assuming near-instant network comms and nothing else strange going on, the fact that you have recreated the problem with mosquitto_sub points to the MQTT broker being the source of the problem.
Without knowing what broker you are using and how heavily it is loaded it's hard to say more, but you should look at the broker logs.
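One way to confirm where those 3 seconds go is to timestamp the publish and the PUBACK in the client itself. A minimal sketch, assuming the Eclipse Paho C client; the broker URL, client ID, topic, and payload are placeholders:

#include <stdio.h>
#include <time.h>
#include "MQTTClient.h"

int main(void)
{
    MQTTClient client;
    MQTTClient_connectOptions opts = MQTTClient_connectOptions_initializer;
    MQTTClient_message msg = MQTTClient_message_initializer;
    MQTTClient_deliveryToken token;
    struct timespec t0, t1;

    MQTTClient_create(&client, "tcp://broker.example:1883", "delay-probe",
                      MQTTCLIENT_PERSISTENCE_NONE, NULL);
    MQTTClient_connect(client, &opts);

    msg.payload = (void *)"ping";
    msg.payloadlen = 4;
    msg.qos = 1;                       // QoS 1: broker must answer with PUBACK

    clock_gettime(CLOCK_MONOTONIC, &t0);
    MQTTClient_publishMessage(client, "test/delay", &msg, &token);
    MQTTClient_waitForCompletion(client, token, 10000L);   // blocks until PUBACK
    clock_gettime(CLOCK_MONOTONIC, &t1);

    printf("publish -> PUBACK: %.3f s\n",
           (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);

    MQTTClient_disconnect(client, 1000);
    MQTTClient_destroy(&client);
    return 0;
}

If the measured gap is the full 3 seconds, the PUBACK itself is late and the broker logs are the place to look; if the gap is small, the delay is on the subscriber path instead.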
I have an iOS application that remotely connects to 3 sockets (of some hardware). Each socket has its own priority: one channel is used only for transferring messages between the iPad app & hardware, one for Tx/Rx of images, and another for Tx/Rx of videos. I had implemented all three sockets using the GCDAsyncSocket API & things worked fine while using MSGSocket/ImageSocket (OR) MSGSocket/VideoSocket, but when I start using VideoSocket/ImageSocket/MSGSocket simultaneously, things go a little haywire: I lose packets of data. {Actually a chunk of a file goes missing :-(} I went through the API & found a reported bug in it: Unable to complete Read Stream, which I assumed could be the cause of the problem. Hence, I switched to threads & implemented the same using the NSThread/CFSocket API.
I changed only the implementation of the ImageSocket/VideoSocket code using the NSThread/CFSocket API & here is the implementation of the same, dropbox-ed. I'm just unable to understand where things are going wrong, whether it is at the iOS app end or at the server side. In my understanding there should be no loss of packets in TCP communication.
Is there a way to debug this issue? I also request you to go through the code & let me know if anything is wrong (I know this can be too much to ask, but I need some assurance that the code implementation is correct). Any help to resolve this issue will be highly appreciated.
EDIT 1: After @JoeMcMahon's comment, I referred to this Technical Q&A & got a TCP dump - a trace.pcap file. I opened this TCP dump with Wireshark & it does show me the bytes transferred between the ports of the hardware & the iPad.
Also, in the terminal, when I stopped the tcpdump capture, I saw these messages:
12463 packets captured
36469 packets received by filter
0 packets dropped by kernel
Can someone point out the difference between packets captured & packets received by filter?
Note - The TCP dump attached is not for a failed scenario.
EDIT 1.1: Found the answer to the difference between packets captured & packets received by filter here
TCP communication is not guaranteed to be reliable at the packet level. The basic SYN/ACK paradigm can break; that is why there are retransmission mechanisms etc. Wireshark reports such problems in your packet capture session.
When using Wireshark/tcpdump, you generally want to provide a capture filter, since the amount of traffic going through the wire is overwhelming (ping, NTP, etc.); filter the capture with some basic expression (e.g. the host and port your hardware uses) to see only the packets relevant to you. The packets which are filtered out are not captured, hence the numerical difference.
If a chunk of a file went missing, I doubt the issue is at the TCP level. Most likely something at a higher level went wrong. I would run a fixed-size file repeatedly through the channel until I can reliably reproduce the loss.
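A sketch of that kind of test harness: read fixed-size chunks off the TCP socket and checksum them, so a loss shows up as a mismatch. The helper names are mine; note that recv() must be looped, since TCP is a byte stream and one read can return less than one send:

#include <stdint.h>
#include <stddef.h>
#include <sys/types.h>
#include <sys/socket.h>

// Simple rolling checksum; run it on both ends and compare the results.
uint32_t chunk_checksum(const uint8_t *buf, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum = sum * 31 + buf[i];
    return sum;
}

// Read exactly chunk_len bytes before checksumming: a single recv() may
// return fewer bytes than the peer sent in one write.
int recv_exact(int fd, uint8_t *buf, size_t chunk_len)
{
    size_t got = 0;
    while (got < chunk_len)
    {
        ssize_t n = recv(fd, buf + got, chunk_len - got, 0);
        if (n <= 0)
            return -1;   // peer closed the connection or an error occurred
        got += (size_t)n;
    }
    return 0;
}

Incidentally, treating one recv() as one complete message is a very common cause of "missing chunks" over TCP; if either side reads short and discards the remainder, data appears to vanish even though TCP delivered every byte.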
I want to design a Ruby / Rails solution that sends a message out to several listening sockets on a local LAN at the exact same time, and I want the receiving servers to receive the message at the same time, down to the millisecond level.
What is the best strategy that will effectively allow the receiving sockets to receive it at the exact same time? Naturally my requirements are extremely time sensitive.
I'm basing some of my research / design on the two following articles:
http://onestepback.org/index.cgi/Tech/Ruby/MulticastingInRuby.red
http://www.tutorialspoint.com/ruby/ruby_socket_programming.htm
Currently I'm working on a TCP solution rather than UDP because of its guaranteed delivery. I was also going to stand up ready-connected connections to all outbound ports, then iterate over each connection and send the minimal packet of data.
Recently I've been looking at multicasting, and possibly reverting to a UDP approach in which the receiver returns a passive response, so I can ensure the message arrived, via either UDP or TCP.
Note - I have new-guy syndrome here with sockets.
Is it better to just use UDP and broadcast the packet to the entire subnet, without guaranteed immediate delivery?
Is this really a time-sensitive component? If it's truly down to the microsecond level, then you may want to ensure it's implemented close to native functions on the hardware. That being said, a TCP ACK should be faster than a UDP send plus a software response.
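On the multicast idea: the mechanics are just a UDP socket sending to a group address, in Ruby or any other language. A minimal sketch in C to show the moving parts; the group address 239.1.1.1, port 5000, and TTL are placeholder values:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    unsigned char ttl = 1;                 // stay on the local LAN

    // Scope the datagram to the local network segment.
    setsockopt(fd, IPPROTO_IP, IP_MULTICAST_TTL, &ttl, sizeof(ttl));

    struct sockaddr_in group;
    memset(&group, 0, sizeof(group));
    group.sin_family = AF_INET;
    group.sin_port = htons(5000);                      // placeholder port
    inet_pton(AF_INET, "239.1.1.1", &group.sin_addr);  // placeholder group address

    // One send; every subscribed receiver gets a copy at (nearly) the
    // same time, because the network fans the packet out, not the sender.
    const char msg[] = "tick";
    sendto(fd, msg, sizeof(msg), 0, (struct sockaddr *)&group, sizeof(group));

    close(fd);
    return 0;
}

The appeal over iterating pre-connected TCP sockets is that the sender transmits once and the network does the fan-out, so receivers are not skewed by the send loop's own latency; the trade-off is UDP's lack of delivery guarantees.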
I have got a Wavecom Supreme GSM modem. I wrote a simple application that communicates with the modem and reads text messages it receives.
My application queries the modem for the number of messages it stores in its memory and, if the number is greater than 0, reads the messages, deleting them from the modem's memory. I query the modem this way every few seconds.
Unfortunately, however, the modem hangs every few minutes and does not respond to any AT commands I send to it. The only solution I have come up with to unlock the communication is to close the serial port and open it anew. Then everything is fine for the next few minutes, after which the serial port has to be reopened again when the modem hangs.
It can of course be the modem's fault, but I'm wondering whether the way I communicate with it is OK.
First of all, I open the modem's serial port for asynchronous operations. Then I set the DCB structure as follows:
GetCommState(PortHandle, DCB);        // start from the port's current settings
DCB.BaudRate := 115200;
DCB.ByteSize := 8;
DCB.Parity := NOPARITY;
DCB.StopBits := ONESTOPBIT;           // i.e. 115200 8-N-1
DCB.EvtChar := #13;                   // event character: carriage return
SetCommState(PortHandle, DCB);
SetCommMask(PortHandle, EV_RXFLAG);   // signal an event when EvtChar arrives
//the modem does not respond without setting these:
EscapeCommFunction(PortHandle, SETDTR);
EscapeCommFunction(PortHandle, SETRTS);
And then all I do is send AT commands and wait for the modem's response. I do not use any flow control. All I do is wait for a comm event, read the data from the serial port's queue when the modem responds, and write AT commands followed by the #13 character to query the modem for messages.
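In C terms, that cycle boils down to something like the following sketch (synchronous for brevity, while my real code opens the port for overlapped I/O; AT+CPMS? is just an example command):

#include <windows.h>
#include <stdio.h>

/* Simplified synchronous version of the write / wait-for-event / read
 * cycle described above. */
void queryModem(HANDLE port)
{
    const char cmd[] = "AT+CPMS?\r";   // example AT command ending in #13
    DWORD written, bytesRead, events;
    char buff[256];

    WriteFile(port, cmd, sizeof(cmd) - 1, &written, NULL);

    // Blocks until the event character (#13) arrives, per SetCommMask.
    if (WaitCommEvent(port, &events, NULL) && (events & EV_RXFLAG))
    {
        if (ReadFile(port, buff, sizeof(buff) - 1, &bytesRead, NULL))
        {
            buff[bytesRead] = '\0';
            printf("modem: %s\n", buff);
        }
    }
}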
I think I may have set the DCB structure improperly, for as you can see, I do not modify some of its fields. Because my knowledge of serial ports is not sufficient, I do not know how to set the RTS and DTR control (enabled/disabled/handshake/toggle).
If you notice some obvious mistakes in this way of handling the modem, I would be grateful if you explained what I have done wrong. If everything is fine, on the other hand, maybe you have an idea why the modem hangs?
Thank you in advance.
Typically the DCB settings are the first thing you should verify. The modem documentation should mention the serial port settings. If not, search online for the model number of your modem.
Make sure the flow control in Device Manager, the modem, and the program are all set the same. I don't know Delphi, but I think the DCB should have a "Flags" field. Try setting it to 24 for hardware flow control.
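For reference, here is how the same flow-control bits look when set through the named DCB fields rather than a raw Flags value, sketched in C against the Win32 API (whether hardware flow control is actually right for this modem is an assumption to verify against its documentation):

#include <windows.h>

/* Hypothetical helper: enable RTS/CTS hardware flow control on an open
 * serial port handle. DTR is simply forced on, matching the original
 * EscapeCommFunction(SETDTR) behaviour. */
BOOL enableHardwareFlowControl(HANDLE port)
{
    DCB dcb = { 0 };
    dcb.DCBlength = sizeof(dcb);

    if (!GetCommState(port, &dcb))
        return FALSE;

    dcb.fOutxCtsFlow = TRUE;                   // pause output while CTS is deasserted
    dcb.fRtsControl  = RTS_CONTROL_HANDSHAKE;  // driver toggles RTS as the
                                               // receive buffer fills and drains
    dcb.fDtrControl  = DTR_CONTROL_ENABLE;     // keep DTR asserted
    dcb.fOutX = FALSE;                         // no XON/XOFF software flow control
    dcb.fInX  = FALSE;

    return SetCommState(port, &dcb);
}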
I have created a raw socket which receives all IPv4 packets from the data link layer (with the data link layer header removed). For reading the packets I use recvfrom.
My doubt is:
Suppose that, due to some scheduling done by the OS, my process was asleep for 1 second. When it woke up, it called recvfrom on this raw socket, asking for, say, 1000 bytes, with the intention of receiving only one IPv4 packet, of size, say, 380 bytes. And suppose many network applications were also running during this time, so several IPv4 packets have been queued in the receive buffer of this socket. Will recvfrom now return all 1000 bytes (with other IPv4 packets from the 381st byte onwards), because it has enough data in its buffer, even though my program was meant to understand only one IPv4 packet?
So how do I prevent this? Should I read byte by byte and parse each byte? That would be very inefficient.
IIRC, recvfrom() will only return one packet at a time, even if there are more in the queue.
Raw sockets operate at the packet layer; there is no concept of data streams.
You might be interested in recvmmsg() if you want to read multiple packets in one system call (recent Linux kernels only; the send-side counterpart is sendmmsg()).
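A minimal sketch of recvmmsg() on a packet socket that delivers IPv4 payloads with the link-layer header already stripped, matching the setup described above (the batch size and buffer sizes are arbitrary; the socket requires root):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <linux/if_ether.h>   // ETH_P_IP
#include <arpa/inet.h>        // htons

#define BATCH   8
#define PKT_MAX 2048

int main(void)
{
    // SOCK_DGRAM packet socket: the kernel removes the link-layer header,
    // so each message starts at the IPv4 header.
    int fd = socket(AF_PACKET, SOCK_DGRAM, htons(ETH_P_IP));
    if (fd < 0) { perror("socket"); return 1; }

    struct mmsghdr msgs[BATCH];
    struct iovec iovs[BATCH];
    char bufs[BATCH][PKT_MAX];

    memset(msgs, 0, sizeof(msgs));
    for (int i = 0; i < BATCH; i++) {
        iovs[i].iov_base = bufs[i];
        iovs[i].iov_len  = PKT_MAX;
        msgs[i].msg_hdr.msg_iov    = &iovs[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
    }

    // One system call, up to BATCH datagrams; each msg_len is the size
    // of exactly one packet, so packet boundaries are preserved.
    int n = recvmmsg(fd, msgs, BATCH, 0, NULL);
    if (n < 0) { perror("recvmmsg"); return 1; }

    for (int i = 0; i < n; i++)
        printf("packet %d: %u bytes\n", i, msgs[i].msg_len);

    return 0;
}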