Detect disconnected socket? - iOS

I have a client iOS app that connects to a server.
It uses a non-blocking socket:
int fl = fcntl(s, F_GETFL, 0);
fcntl(s, F_SETFL, fl | O_NONBLOCK);

int set = 1;
setsockopt(s, SOL_SOCKET, SO_NOSIGPIPE, (void *)&set, sizeof(int));
If there is no data, read returns -1.
If the peer has disconnected, read returns 0.
But this is not always true: sometimes the connection is lost, yet -1 is returned.
Is there something like EOF I can detect?

0 is the EOF. If there is an error, read returns -1, and you should look at errno to see what the error is. The only reliable way to detect a dropped connection in TCP is by writing to it at least twice: the first write after the peer vanishes typically succeeds, because the data just sits in the local send buffer, and the failure only surfaces on a subsequent write.
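A minimal sketch of how those return values can be interpreted on a non-blocking socket (the function and buffer names are placeholders, not part of any API):

#include <errno.h>
#include <sys/types.h>
#include <unistd.h>

/* Returns 1 if data was read, 0 if no data is available yet,
 * and -1 if the connection should be treated as closed or broken. */
static int poll_socket(int s, char *buf, size_t len, ssize_t *out_n)
{
    ssize_t n = read(s, buf, len);
    if (n > 0) {                 /* got data */
        *out_n = n;
        return 1;
    }
    if (n == 0)                  /* EOF: the peer closed the connection */
        return -1;
    if (errno == EAGAIN || errno == EWOULDBLOCK)
        return 0;                /* non-blocking socket, no data right now */
    return -1;                   /* real error, e.g. ECONNRESET or ETIMEDOUT */
}

Note that a connection that dies silently (cable pulled, NAT timeout) produces none of these outcomes until a write or TCP keepalive forces the stack to notice, which is why writing is the reliable probe.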

Related

Dropped packets even if rte_eth_rx_burst does not return a full burst

I have a weird drop problem, and the best way to understand my question is to have a look at this simple snippet:
while( 1 )
{
    if( config->running == false ) {
        break;
    }

    num_of_pkt = rte_eth_rx_burst( config->port_id,
                                   config->queue_idx,
                                   buffers,
                                   MAX_BURST_DEQ_SIZE);

    if( unlikely( num_of_pkt == MAX_BURST_DEQ_SIZE ) ) {
        rx_ring_full = true; // probably not the best name
    }

    if( likely( num_of_pkt > 0 ) )
    {
        pk_captured += num_of_pkt;

        num_of_enq_pkt = rte_ring_sp_enqueue_bulk(config->incoming_pkts_ring,
                                                  (void*)buffers,
                                                  num_of_pkt,
                                                  &rx_ring_free_space);
        // if num_of_enq_pkt == 0, free the mbufs..
    }
}
This loop is retrieving packets from the device and pushing them into a queue for further processing by another lcore.
When I run a test with a Mellanox card sending 20M (20878300) packets at 2.5M p/s, the loop seems to miss some packets, and pk_captured is always around 19M or similar.
rx_ring_full is never true, which means that num_of_pkt is always < MAX_BURST_DEQ_SIZE, so according to the documentation I should not have drops at the HW level. Also, num_of_enq_pkt is never 0, which means that all the packets are enqueued.
Now, if from that snippet I remove the rte_ring_sp_enqueue_bulk call (and make sure to release all the mbufs), then pk_captured is always exactly equal to the number of packets I've sent to the NIC.
So it seems (though I can't quite accept this idea) that rte_ring_sp_enqueue_bulk is somehow too slow, and between one call to rte_eth_rx_burst and the next some packets are dropped due to a full ring on the NIC. But then why is num_of_pkt (from rte_eth_rx_burst) always smaller than MAX_BURST_DEQ_SIZE (much smaller), as if there were always sufficient room for the packets?
Note, MAX_BURST_DEQ_SIZE is 512.
edit 1:
Perhaps this information might help: the drops also seem to be visible via rte_eth_stats_get. Or, to be more correct, no drops are reported (imissed and ierrors are 0), but the value of ipackets equals my counter pk_captured (have the missing packets just disappeared?)
edit 2:
According to ethtool, rx_crc_errors_phy is zero and all the packets are received at the PHY level (rx_packets_phy is updated with the correct number of transferred packets).
The value of rx_nombuf from rte_eth_stats seems to contain trash (this is a print from our test application):
OUT(4): Port 1 stats: ipkt:19439285,opkt:0,ierr:0,oerr:0,imiss:0, rxnobuf:2061021195718
For a transfer of 20M packets, as you can see, rxnobuf is either garbage or has a meaning I do not understand. The log is generated by:
log("Port %"PRIu8" stats: ipkt:%"PRIu64",opkt:%"PRIu64",ierr:%"PRIu64",oerr:%"PRIu64",imiss:%"PRIu64", rxnobuf:%"PRIu64,
config->port_id,
stats.ipackets, stats.opackets,
stats.ierrors, stats.oerrors,
stats.imissed, stats.rx_nombuf);
where stats comes from rte_eth_stats_get.
The packets are not generated on the fly but replayed from an existing PCAP.
edit 3:
After the answer from Adriy (thanks!) I've included the xstats output for the Mellanox card. While reproducing the same problem with a smaller set of packets, I can see that rx_mbuf_allocation_errors gets updated, but it seems to contain trash:
OUT(4): rx_good_packets = 8094164
OUT(4): tx_good_packets = 0
OUT(4): rx_good_bytes = 4211543077
OUT(4): tx_good_bytes = 0
OUT(4): rx_missed_errors = 0
OUT(4): rx_errors = 0
OUT(4): tx_errors = 0
OUT(4): rx_mbuf_allocation_errors = 146536495542
Also, those counters seem interesting:
OUT(4): tx_errors_phy = 0
OUT(4): rx_out_of_buffer = 257156
OUT(4): tx_packets_phy = 9373
OUT(4): rx_packets_phy = 8351320
Here rx_packets_phy is the exact number of packets I've been sending, and summing rx_out_of_buffer with rx_good_packets gives exactly that amount. So it seems that the mbufs get depleted and some packets are dropped.
I made a tweak in the original code: now I make a copy of the mbuf from the RX ring (using link) and immediately release the original; further processing is done on the copy by another lcore. Sadly this does not fix the problem. It turns out that to make it go away I have to disable the packet processing and release the packet copy as well (on the other lcore), which makes no sense.
Well, I will do a bit more investigation, but at least rx_mbuf_allocation_errors seems to need a fix here.
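(A quick way to confirm the depletion theory is to watch the RX mempool while traffic is flowing. A minimal sketch; config->rx_mp is an assumed name for the mempool that was passed to rte_eth_rx_queue_setup():)

#include <stdio.h>
#include <rte_mempool.h>

/* If 'avail' hits 0, the PMD cannot refill its RX ring and the NIC
 * starts dropping (visible as rx_out_of_buffer on mlx5).
 * Call periodically, e.g. dump_mempool_usage(config->rx_mp); */
static void dump_mempool_usage(const struct rte_mempool *mp)
{
    printf("mbufs avail: %u, in use: %u\n",
           rte_mempool_avail_count(mp),
           rte_mempool_in_use_count(mp));
}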
I guess debugging the rx_nombuf counter is the way to go. It might look like garbage, but in fact this counter does not reflect the number of dropped packets (like ierrors or imissed do), but rather the number of failed RX attempts.
Here is a snippet from MLX5 PMD:
uint16_t
mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
{
    [...]
    while (pkts_n) {
        [...]
        rep = rte_mbuf_raw_alloc(rxq->mp);
        if (unlikely(rep == NULL)) {
            ++rxq->stats.rx_nombuf;
            if (!pkt) {
                /*
                 * no buffers before we even started,
                 * bail out silently.
                 */
                break;
So the plausible scenario for the issue is as follows:
1. There is a packet in the RX queue.
2. There are no buffers in the corresponding mempool.
3. The application polls for new packets, i.e. calls in a loop: num_of_pkt = rte_eth_rx_burst(...)
4. Each time we call rte_eth_rx_burst(), the rx_nombuf counter gets increased.
Please also have a look at rte_eth_xstats_get(). For MLX5 PMD there is a hardware rx_out_of_buffer counter, which might confirm this theory.
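For reference, a minimal sketch of dumping every extended stat with rte_eth_xstats_get() (error handling trimmed for brevity):

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

/* Print all xstats for a port, e.g. to spot rx_out_of_buffer. */
static void dump_xstats(uint16_t port_id)
{
    /* A first call with NULL asks only for the number of counters. */
    int n = rte_eth_xstats_get(port_id, NULL, 0);
    struct rte_eth_xstat *xs = calloc(n, sizeof(*xs));
    struct rte_eth_xstat_name *names = calloc(n, sizeof(*names));

    rte_eth_xstats_get_names(port_id, names, n);
    rte_eth_xstats_get(port_id, xs, n);
    for (int i = 0; i < n; i++)
        printf("%s = %" PRIu64 "\n", names[i].name, xs[i].value);

    free(xs);
    free(names);
}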
The solution to the missing packets is to change the ring API from bulk to burst. In DPDK there are two modes of ring operation, bulk and burst. In bulk dequeue/enqueue mode, if 32 elements are requested and only 31 are available, the API returns 0 and transfers nothing; in burst mode it transfers as many elements as it can.
I have faced a similar issue too.
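A minimal sketch of the burst variant, reusing the variable names from the question (requires <rte_ring.h> and <rte_mbuf.h>); whatever does not fit in the ring must be freed, or the mempool drains:

/* Enqueue as many packets as fit; free the rest so the mbufs
 * return to the pool instead of leaking. */
unsigned int enq = rte_ring_sp_enqueue_burst(config->incoming_pkts_ring,
                                             (void **)buffers,
                                             num_of_pkt,
                                             &rx_ring_free_space);
for (unsigned int i = enq; i < num_of_pkt; i++)
    rte_pktmbuf_free(buffers[i]);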

Why is my program reporting more captured packets than Wireshark?

I am writing a packet sniffer using pcap and Visual Studio. I've taken sample code for offline capturing and combined it with code that looks for an interface and captures packets live. This is what I have to display the packet information gotten from [1]:
while (int returnValue = pcap_next_ex(pcap, &header, &data) >= 0)
{
    // Print using printf. See printf reference:
    // http://www.cplusplus.com/reference/clibrary/cstdio/printf/

    // Show the packet number
    printf("Packet # %i\n", ++packetCount);

    // Show the size in bytes of the packet
    printf("Packet size: %d bytes\n", header->len);

    // Show a warning if the length captured is different
    if (header->len != header->caplen)
        printf("Warning! Capture size different than packet size: %ld bytes\n", header->len);

    // Show Epoch Time
    printf("Epoch Time: %d:%d seconds\n", header->ts.tv_sec, header->ts.tv_usec);

    // Loop through the packet and print it as hexadecimal representations of octets
    // We also have a function that does this similarly below: PrintData()
    for (u_int i = 0; i < header->caplen; i++)
    {
        // Start printing on the next line after every 16 octets
        if ((i % 16) == 0) printf("\n");

        // Print each octet as hex (x); make sure there are always two characters (.2).
        printf("%.2x ", data[i]);
    }

    // Add two lines between packets
    printf("\n\n");
}
The problem I'm having is that if I run a Wireshark live capture and also run my program, both capture packets live, but Wireshark will show that it's capturing packet 20 while VS will show packetCount = 200. (Note: arbitrary numbers chosen to show that Wireshark hasn't captured many packets while my program is counting extremely fast.)
From what I understand, it seems the while loop is just running much faster than the packets are coming in, so it's printing information from the last packet it received over and over again until a new one comes in.
How can I get VS to only capture packets as they come in?
I was unaware that Visual Studio included a packet sniffer; did you mean "how can I get my application, which I'm building with Visual Studio, to only capture packets as they come in?"
If that's what you meant, then that's what your code is doing.
Unfortunately, what it's not doing is "checking whether a packet has actually come in".
To quote the man page for pcap_next_ex() (yeah, UN*X, but this applies to Windows and WinPcap as well):
pcap_next_ex() returns 1 if the packet was read without problems, 0 if packets are being read from a live capture and the timeout expired, -1 if an error occurred while reading the packet, and -2 if packets are being read from a "savefile" and there are no more packets to read from the savefile. If -1 is returned, pcap_geterr() or pcap_perror() may be called with p as an argument to fetch or display the error text.
Emphasis on "0 if packets are being read from a live capture, and the timeout expired"; 0 means that you did not get a packet.
Do not act as if a packet were captured if pcap_next_ex() returned 0; only do so if it returned 1:
int returnValue;
while ((returnValue = pcap_next_ex(pcap, &header, &data)) >= 0)
{
    if (returnValue == 1) {
        // Print using printf. See printf reference:
        // http://www.cplusplus.com/reference/clibrary/cstdio/printf/

        // Show the packet number
        printf("Packet # %i\n", ++packetCount);

        ...

        // Add two lines between packets
        printf("\n\n");
    }
}
SOLUTION: So apparently, adding parentheses around the assignment fixes the issue, because returnValue then receives the result of pcap_next_ex() itself rather than the result of the >= comparison (with the declaration moved out of the condition):

int returnValue;
while ((returnValue = pcap_next_ex(pcap, &header, &data)) >= 0)

Lua dissector for custom protocol

I have written several Lua Dissectors for custom protocols we use and they work fine. In order to spot problems with missing packets I need to check the custom protocol sequence numbers against older packets.
The IP source and Destination addresses are always the same for device A to device B.
Inside this packet we have one custom ID.
Each ID has a sequence number so device B can determine if a packet is missing. The sequence number increments by 256 and rolls over when it reaches 65k
I have tried using a global dictionary, but when you scroll up and down the trace the dissector is rerun and the values change.
A couple of lines below show where the information is stored:
ID = buffer(0,6):bitfield(12,12)
SeqNum = buffer(0,6):bitfield(32,16)
Ideally I would like each decoded frame to indicate when the previous sequence number is more than 256 away, and to produce a table listing all these bad frames.
  Src IP;     Dst IP;     ID; Seq
1 10.12.1.2;  10.12.1.3;  10; 0
2 10.12.1.2;  10.12.1.3;  11; 0
3 10.12.1.2;  10.12.1.3;  12; 0
4 10.12.1.2;  10.12.1.3;  11; 255
5 10.12.1.2;  10.12.1.3;  12; 255
6 10.12.1.2;  10.12.1.3;  10; 511   <- packet with seq 255 is missing
I have now managed to get the dissector to check the current packet against previous packets by using a global array, where I store specific information about each frame. While dissecting the current packet, I check the most recent packet and work my way back to the start until I find a suitable packet.
dict[pinfo.number] = {frame = pinfo.number, dID = ID, dSEQNUM = SeqNum}

local frameCount = 0
local frameFound = false

while frameFound == false do
    if pinfo.number > frameCount then
        frameCount = frameCount + 1
        if dict[(pinfo.number - frameCount)] ~= nil then
            if dict[(pinfo.number - frameCount)].dID == dict[pinfo.number].dID then
                seq_difference = (dict[(pinfo.number)].dSEQNUM - dict[(pinfo.number - frameCount)].dSEQNUM)
                if seq_difference > 256 then
                    pinfo.cols.info = string.format('ID-%d SeqNum-%d missing packet(s) %d last frame %d ', ID, SeqNum, seq_difference, dict[(pinfo.number - frameCount)].frame)
                end
                frameFound = true
            end
        end
    else
        frameFound = true
    end
end
I'm not sure I see a question to answer. If you're asking "how can I avoid having to deal with the dissector being invoked multiple times and screwing up the previous decoding of the values?", the answer is the pinfo.visited boolean. It will be false the first time a given packet is dissected, and true thereafter no matter how much clicking around the user does, until the file is reloaded or a new one is loaded.
To handle the reloading/new-file case, you'd hook into the init() function call for your proto, by defining a myproto.init() function, and in it clear your entire array table.
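A minimal sketch of that pattern in Lua, assuming a proto named myproto and the dict table from the question:

local dict = {}

-- called whenever a capture file is reloaded or a new one is opened
function myproto.init()
    dict = {}
end

function myproto.dissector(buffer, pinfo, tree)
    local ID = buffer(0,6):bitfield(12,12)
    local SeqNum = buffer(0,6):bitfield(32,16)
    if not pinfo.visited then
        -- first dissection pass only: record this frame's state
        dict[pinfo.number] = {frame = pinfo.number, dID = ID, dSEQNUM = SeqNum}
    end
    -- later passes (e.g. the user clicking around) only read dict,
    -- so the recorded values never change
end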
Also, you might want to google for related questions/answers on ask.wireshark.org, as that site is more frequently used for Wireshark Lua API questions. For example, this question/answer is similar and related to your case.

Wireshark: Flag abbreviations and Exchange type

I was told to ask this here:
10:53:04.042608 IP 172.17.2.12.42654 > 172.17.2.6.6000: Flags [FPU], seq 3891587770, win 1024, urg 0, length 0
10:53:04.045939 IP 172.17.2.6.6000 > 172.17.2.12.42654: Flags [R.], seq 0, ack 3891587770, win 0, length 0
This states that the flags set are FPU and R. What flags do these stand for and what kind of exchange is this?
The flags are:
F - FIN, used to terminate an active TCP connection from one end.
P - PUSH, asks that any data the receiving end is buffering be sent to the receiving process.
U - URGENT, indicating that there is data referenced by the urgent "pointer."
R - RESET, indicating that a packet was received that was NOT part of an existing connection.
(The "." in [R.] is tcpdump's notation for the ACK flag.)
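These letters map directly to bits in the flags octet of the TCP header; a small sketch for reference (the constants are written out here rather than taken from a system header):

#include <stdio.h>

/* TCP flag bits as they appear on the wire (RFC 793 flags octet). */
enum {
    TCP_FIN = 0x01, TCP_SYN = 0x02, TCP_RST = 0x04,
    TCP_PSH = 0x08, TCP_ACK = 0x10, TCP_URG = 0x20
};

int main(void)
{
    unsigned fpu = TCP_FIN | TCP_PSH | TCP_URG;  /* tcpdump "[FPU]" */
    unsigned ra  = TCP_RST | TCP_ACK;            /* tcpdump "[R.]"  */
    printf("FPU = 0x%02x, R. = 0x%02x\n", fpu, ra);
    return 0;
}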
It looks like the first packet was manufactured, or possibly delayed. The argument for it being manufactured is the urgent flag being set, with no urgent data. If it was delayed, it indicates the normal end of a connection between .12 and .6 on port 6000, along with a request that the last of any pending data sent across the wire be flushed to the service on .6.
.6 has clearly forgotten about this connection, if it even existed. .6 is indicating that while it got the FIN packet, it believes that the connection the FIN packet refers to did not exist.
If .6 had a current matching connection, it would have replied with a FIN-ACK instead of RST, acknowledging the termination of the connection.

tracert command returns timed out

tracert returns "Request timed out". What I understand from this is that the packets are lost somewhere on the network.
Does that mean the issue is with the ISP, with the hosting provider, or with my Windows system?
10 * * * Request timed out.
11 * * * Request timed out.
12 * * * Request timed out.
13 * * * Request timed out.
14 * * * Request timed out.
15 * * * Request timed out.
16 * * * Request timed out.
17 * * * Request timed out.
18 * * * Request timed out.
19 * * * Request timed out.
20 * * * Request timed out.
21 * * * Request timed out.
22 * * * Request timed out.
23 * * * Request timed out.
24 * * * Request timed out.
25 * * * Request timed out.
26 * * * Request timed out.
27 * * * Request timed out.
28 * * * Request timed out.
29 * * * Request timed out.
30 * * * Request timed out.
The first 9 were successful.
I can't see the first 9 hops, but if they are all the same then you may have a firewall configuration issue that prevents the packets from either getting out or getting back.
Try again with your firewall turned off (temporarily!). The other option is that your ISP may drop ICMP traffic as a matter of course, or only when it is busy with other traffic.
ICMP (the protocol used by traceroute) is of the lowest priority; when higher-priority traffic is ongoing, a router may be configured to simply drop ICMP packets. There is also the possibility that the ISP drops all ICMP packets as a matter of security, since many DoS (Denial of Service) attacks are based on probing done with ICMP packets.
Some routers view all pings as a port scan and block them for that reason (the first step in any attack is determining which ports are open). However, blocking ping/tracert packets is only partially effective at mitigating a denial-of-service attack, as such an attack could use any protocol it wanted (such as TCP or UDP), as long as there is an open port on the targeted machine to receive the packet. For example, to target an HTTP server, one only needs an intercepting proxy to repeatedly send a null TCP packet to the server on port 80 or 8080, since those are the two most common ports for HTTP. Likewise, if the target machine is running an IRCd, the port is most likely 6667 (unless the server is using SSL), the most common port for that kind of service. Therefore, dropping ping packets does not prevent a DDoS attack; it just makes that type of attack a bit more difficult.
This is what I found in the Wireshark documentation (I had the same problem):
"The tracert program provided with Windows does not allow one to change the size of the ICMP message sent by tracert. So it won't be possible to use a Windows machine to generate ICMP messages that are large enough to force IP fragmentation. However, you can use tracert to generate small, fixed-length packets."
https://danielgraham.files.wordpress.com/2021/09/wireshark_ip_v8.1-2.pdf
Use tracert -h 1.
This limits the maximum number of hops tracert will search to 1 (h = hops), so it won't waste up to 30 attempts on an address that never answers. I had written a batch script a while back to scan my entire network to get a list of IPs and computer names, and it would waste time on the firewall that wouldn't answer and on IP addresses that weren't assigned to any computer. Wicked annoying, so I added the -h 1 to the script! It runs through and makes a list in a text file. I hope to improve it in the future by running arp -a first to get a quick list of IPs, then feeding that list into a script similar to this one, so it doesn't waste time on unassigned IPs.
@echo off
rem -h 1: give up after the first hop so dead addresses don't stall the scan
set trace=tracert
set /a byte1=222
set /a byte2=222
set /a byte3=222
set /a byte4=100
set loop=0
:loop
%trace% -h 1 %byte1%.%byte2%.%byte3%.%byte4%>>ips.txt
set /a loop=%loop% + 1
set /a byte4=%byte4% + 1
echo %byte4%
if %loop%==255 goto next
goto loop
:next
Your antivirus may be blocking the incoming packets, and in some cases this cannot be turned off, because blocking suspicious packets is a basic function of an antivirus, i.e. protecting the computer from probing as well as DoS (Denial of Service) attacks.
