I'm trying to understand the purpose of the csum_start and csum_offset fields in struct sk_buff.
Googling for them, I came across the following definition:
csum_start is the offset from skb->head to the place where checksumming should start.
csum_offset is the offset from csum_start to the place where the computed checksum should be stored.
When are these fields actually used?
If the checksum is offloaded to a device driver via NETIF_F_HW_CSUM, how are the aforementioned values to be used/interpreted in this context?
Any insight on the above is highly appreciated!
If a device advertises the NETIF_F_HW_CSUM feature, then the network stack does not compute the transport checksum on the transmit path. Instead, it asks the device to compute the checksum by setting ip_summed to CHECKSUM_PARTIAL.
The device then uses csum_start (often via the skb_checksum_start_offset(skb) helper, which converts it into an offset from skb->data) as the starting position and computes the checksum from there to the end of the packet (the len field of the socket buffer). The computed checksum is stored csum_offset bytes beyond csum_start.
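To make the two offsets concrete, here is a minimal C sketch, modeled on the kernel's own skb_checksum_help() (error handling omitted), of what completing a CHECKSUM_PARTIAL packet in software looks like; a NETIF_F_HW_CSUM device performs the same arithmetic in hardware using these offsets:

    #include <linux/skbuff.h>
    #include <net/checksum.h>

    /* Sketch only: roughly what skb_checksum_help() does when an skb is
     * marked CHECKSUM_PARTIAL but must be checksummed in software. */
    static void fill_partial_csum(struct sk_buff *skb)
    {
        /* csum_start converted into an offset from skb->data */
        int start = skb_checksum_start_offset(skb);

        /* 1's-complement sum from there to the end of the packet */
        __wsum csum = skb_checksum(skb, start, skb->len - start, 0);

        /* fold to 16 bits and store csum_offset bytes past csum_start */
        *(__sum16 *)(skb->data + start + skb->csum_offset) = csum_fold(csum);

        skb->ip_summed = CHECKSUM_NONE; /* checksum is now filled in */
    }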
Well, even Embarcadero states that it is not guaranteed to return an accurate count of the bytes ready to read from the socket buffer. But if you look at it, when you pass -1 to Socket.ReceiveBuf (which is what ReceiveLength wraps), it calls ioctlsocket with FIONREAD to determine the amount of data pending in the network's input buffer that can be read from socket s.
So how is that unsafe or bad?
e.g.: ioctlsocket(Socket.SocketHandle, FIONREAD, Longint(i));
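For reference, the Delphi call above corresponds to this plain-C Winsock sketch (assuming an already-connected SOCKET; error handling omitted):

    #include <winsock2.h>

    /* Sketch: what ReceiveLength boils down to for a connected SOCKET s. */
    static u_long bytes_ready(SOCKET s)
    {
        u_long pending = 0;
        /* FIONREAD reports how many bytes are buffered locally right now,
         * not how many the peer will eventually send. */
        ioctlsocket(s, FIONREAD, &pending);
        return pending;
    }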
The documentation you mention specifically says (emphasis mine)
Note: ReceiveLength is not guaranteed to be accurate for streaming socket connections.
This means that the length is not known ahead of time, because the data is supplied by a stream: FIONREAD can only report what has already arrived in the local buffer, not how much the peer is still going to send. Obviously, if you don't know ahead of time how big the data being sent is, you can't properly set the length the client should expect.
Consider it like generic code to copy a file. If you don't know ahead of time how big the file is you'll be copying, you can't predict how many bytes you'll be copying. In the case of the socket, the stream size that's supplying the socket isn't known in advance (for instance, for data being generated real-time and sent), so there's no way to inform the client socket how much to expect.
I am currently looking at checksums but am having trouble fully understanding how they work.
FYI, I have been looking at UDP checksums and Internet checksums. I have learned that UDP on the sender side performs a 1's complement sum, but I am unclear as to what 1's complement actually is.
I have a rough idea that 1's complement has something to do with 'flipping' all the bits, so that each 1 becomes a 0 and each 0 becomes a 1, but I do not know why this is done in the first place.
Could somebody kindly provide some information about checksums in general?
Thank you.
A checksum is a small value computed from the data (loosely speaking, a hash of it) used to make sure the data is consistent when it gets to the other end. The checksum is computed before the data is sent; when the data is received at the other end, the checksum is computed again over the same data and compared with the checksum from the sender. If they match, the data is presumed to be in good shape; otherwise we know something went wrong.
Fairly simplified explanation.
A checksum is just an integer which is calculated by these rules:
Sum everything in the packet except the checksum field (call the result sum), and save -sum in the checksum field.
When the packet arrives, sum everything in it, including the checksum field. If the total is 0, the packet is valid.
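For a concrete picture, here is a sketch of the Internet checksum (RFC 1071) that UDP uses. It is the same sum-then-store-the-complement idea, done in 1's-complement arithmetic, which is where the bit-flipping mentioned in the question comes in:

    #include <stdint.h>
    #include <stddef.h>

    /* Sketch of the RFC 1071 Internet checksum. The sender stores the
     * bitwise complement of the sum; the receiver sums everything,
     * including the checksum field, and expects 0xFFFF (which plays the
     * role of zero in 1's-complement arithmetic). */
    uint16_t inet_checksum(const uint8_t *data, size_t len)
    {
        uint32_t sum = 0;

        /* add the data as 16-bit big-endian words */
        while (len > 1) {
            sum += (uint32_t)data[0] << 8 | data[1];
            data += 2;
            len -= 2;
        }
        if (len == 1)                /* odd trailing byte: pad with zero */
            sum += (uint32_t)data[0] << 8;

        /* fold the carries back in (the end-around carry that makes this
         * a 1's-complement addition) */
        while (sum >> 16)
            sum = (sum & 0xFFFF) + (sum >> 16);

        return (uint16_t)~sum;       /* store the complement */
    }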
I want to send files (text or binary) through Winsock. I have a buffer of 32768 bytes, and the buffer on the other side is the same size. But when the packet size is less than 32768, I don't know how to determine the end of the packet in the buffer. Also, with a binary file it seems that marking the end of a packet with a unique character is not possible. Is there a solution?
Thanks.
With fixed-size "packets," we would usually expect that every packet except the last will be completely full of valid data. Only the last one will be "partial," and if the recipient knows how many bytes to expect (because, using Davita's suggestion, the sender told it the file size in advance), then that's no problem. The recipient can simply ignore the remainder of the last packet.
But your further description makes it sound like there may be multiple partially full packets associated with a single file transmission. There is a similarly easy solution to that: Prefix each packet with the number of valid bytes.
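Here is a sketch of that length-prefix idea in plain C against Winsock (send_frame is a hypothetical helper; error handling is trimmed):

    #include <winsock2.h>
    #include <stdint.h>

    /* Hypothetical helper: send a 4-byte big-endian length, then exactly
     * that many payload bytes. The receiver reads the 4-byte prefix first
     * and then reads until it has that many bytes -- no sentinel character
     * is needed, so binary data is fine. */
    static int send_frame(SOCKET s, const char *payload, uint32_t len)
    {
        uint32_t net_len = htonl(len);
        if (send(s, (const char *)&net_len, sizeof net_len, 0)
                != (int)sizeof net_len)
            return -1;
        while (len > 0) {
            int n = send(s, payload, (int)len, 0);
            if (n <= 0)
                return -1;
            payload += n;
            len -= (uint32_t)n;
        }
        return 0;
    }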
You later mention TCustomWinSocket.ReceiveText, and you wonder how it knows how much text to read, and then you quote the answer, which is that it calls ReceiveBuf(Pointer(nil)^, -1) to set the length of the result buffer before filling it. Perhaps you just didn't understand what that code is doing. It's easier to understand if you look at the same code in another context, the ReceiveLength method. It makes that same call to ReceiveBuf, indicating that when you pass -1 to ReceiveBuf, it returns the number of bytes waiting to be received.
In order for that to work for your purposes, you cannot send fixed-size packets. If you always send 32KB packets, and just pad the end with zeroes, then ReceiveLength will always return 32768, and you'll have to combine Davita's and my solutions of sending file and packet lengths along with the payload. But if you ensure that every byte in your packet is always valid, then the recipient can know how much to save based on the size of the packet.
One way or another, you need to make sure the sender provides the recipient with the information it needs to do its job. If the sender sends garbage without giving the recipient a way to distinguish garbage from valid data, then you're stuck.
Well, you can always send the file size before you start the file transfer, so you'll know when to stop writing to the file.
I have already read the datasheet and searched Google, but I still don't understand something.
In my case, I set PIN RC6 of a PIC18F26K20 in INPUT mode:
TRISCbits.TRISC6 = 1;
Then I read the value via both the PORT and the LATCH registers, and I get different values!
v1 = LATCbits.LATC6;
v2 = PORTCbits.RC6;
v1 gives me 0, whereas v2 gives 1.
Is this normal?
In which cases should we use PORT, and in which LATCH?
The latch is the output latch onto which values are written. The port is the voltage at the actual pin.
There are a few situations where they can be different. The one that I've encountered most frequently is if you have a pin (accidentally) shorted to ground. If you set the latch high, the latch will read high, but the port will read low because the voltage on the pin is still approximately ground.
Another situation leading to what you've described is when the port pin hasn't been configured correctly. I (and everyone I work with) have spent many hours trying to figure out why a PIC wasn't working as expected, only to eventually find out that we had glossed over turning off the analog modules, for instance. Make sure you go over the section "I/O Ports -> PORT?, TRIS?, and LAT? registers" in the datasheet. You can get more info on the Microchip wiki page, which explains why you can read the wrong value immediately after writing an output on a pin connected to a capacitive load.
That wiki page also explains:
A read of the port latch register returns the settings of the output drivers, whilst a read of the port register returns the logic levels seen on the pins.
Also, here's a snippet from the I/O Ports section on the 18F14K50 (which ought to be the same as the rest of the 18F series):
Each port has three registers for its operation. These registers are:
TRIS register (data direction register)
PORT register (reads the levels on the pins of the device)
LAT register (output latch)
So in most situations, you will write to the latch and read from the port.
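For instance (a sketch assuming Microchip's XC8 compiler and its device headers):

    #include <xc.h>

    void io_demo(void)
    {
        TRISCbits.TRISC7 = 0;    /* RC7 is an output */
        TRISCbits.TRISC6 = 1;    /* RC6 is an input  */

        LATCbits.LATC7 = 1;      /* drive RC7 high: write the latch */

        if (PORTCbits.RC6)       /* read the actual voltage on the RC6 pin */
            LATCbits.LATC7 = 0;  /* react to the input */
    }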
I'll adapt my answer from Electrical Engineering.
Let's use the I/O pin block diagram from the manual.
When you write a bit to an I/O pin, you're storing that bit from the Data Bus into the Data Register (a D flip-flop). If the TRISx bit is 0, the Q output of the Data Register drives the I/O pin. Writing to LATx or writing to PORTx is the same operation (shown in red on the diagram).
On the other hand, reading from LATx is different from reading from PORTx.
When you read from LATx, you're reading what is stored in the Data Register (the D flip-flop), shown in green on the diagram.
And when you read from PORTx, you're reading the actual I/O pin value, shown in blue on the diagram.
PICs use read-modify-write sequences for write operations, and this can be a problem, so the latch is used as a shadow register to avoid it.
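In code, the pitfall the latch guards against looks roughly like this (a sketch assuming XC8-style bit access):

    #include <xc.h>

    void rmw_pitfall_demo(void)
    {
        /* Each bit write compiles to a read-modify-write: read the whole
         * register, change one bit, write the whole register back. */
        PORTCbits.RC0 = 1;  /* reads the PINS of PORTC, sets bit 0, writes back */
        PORTCbits.RC1 = 1;  /* reads the pins again: if the RC0 pin has not
                             * slewed up to a logic '1' yet (capacitive load),
                             * this write stores a 0 back into the RC0 latch */

        /* Writing via LATC reads the latch, which always holds the intended
         * values, so consecutive bit writes never undo each other: */
        LATCbits.LATC0 = 1;
        LATCbits.LATC1 = 1;
    }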
Here's a useful summary from the datasheet.
11.2.3 LAT Registers
The LATx register associated with an I/O pin eliminates the problems that could occur with read-modify-write instructions. A read of the LATx register returns the values held in the port output latches, instead of the values on the I/O pins. A read-modify-write operation on the LAT register, associated with an I/O port, avoids the possibility of writing the input pin values into the port latches. A write to the LATx register has the same effect as a write to the PORTx register.
The differences between the PORT and LAT registers can be summarized as follows:
A write to the PORTx register writes the data value to the port latch.
A write to the LATx register writes the data value to the port latch.
A read of the PORTx register reads the data value on the I/O pin.
A read of the LATx register reads the data value held in the port latch.
Yes, it's normal to read PORTx and LATx and occasionally find they have different values.
When you want to read whether some external hardware is driving a pin high or low, you must set the pin to input mode (with TRIS or the DIR register), and you must read PORTx. That read tells you if the actual voltage at the pin is high or low.
When you want to drive a pin high or low, you must set the pin to output (with TRIS or the DIR register); you should write the bit to the LATx register.
(Writing that bit to the PORTx register may seem to do the right thing: that pin will, eventually, go high or low as commanded. But there are many cases, such as when some other pin on that port is connected to an open-collector bus, where writing to one bit of the PORTx register will mess up the state of the other pins on that port, leading to difficult-to-debug problems.)
Open Circuits: read before write
My recommendation is to regard the PORT values as read-only. The LAT values may be read or written, but the value read will be the last value written, not the input value of the pin.
On older PICs, the LATx values didn't exist; the only way to write to a port was via the PORTx registers. Curiously, some of the really old PICs, back from the General Instruments (pre-Microchip) days, supported LATx, but Microchip didn't add that feature until the PIC18x line.
A write to the PORTx register writes the data value to the port latch.
A write to the LATx register writes the data value to the port latch.
A read of the PORTx register reads the data value on the I/O pin.
A read of the LATx register reads the data value on the port latch.
Use LATx: to write to an output pin
Use PORTx: to read an input pin
For all PICs with LATx registers, all INPUT must be from PORTx and all OUTPUT should be to LATx, which totally avoids the problem of flipping bits when you write to a single bit of the port.
I recently experienced that writing to one PORTx bit Ri (e.g. PORTC RC1) of a PIC18F14K50 is ineffective when another bit Rj of the same port (e.g. PORTC RC0) was already set.
I observed a brief spike on the oscilloscope on that pin, but I was unable to sustain the output.
The issue vanished as soon as I wrote to LATx instead.
Writing to LATx looks mandatory on the PIC18, and writing to PORTx best avoided.
It is always recommended to write to LAT and read from PORT. The reason is that when the port is used as an output, a bit operation on PORT executes a read-modify-write instruction.
Read-modify-write instructions have pitfalls: depending on the output capacitance (and hence the rise time of the port pins), two consecutive read-modify-write instructions may leave the port pins at the wrong value.
So always write to LAT and read from PORT (input pins).
I need to calculate the total data transferred when moving a fixed amount of data from client to server over TCP/IP. That includes connecting to the server, sending the request and headers, receiving the response, receiving the data, etc.
More precisely, how do I get the total data transferred when using the POST and GET methods?
Is there a formula for that? Even a theoretical one will do fine (not considering packet loss, connection retries, etc.).
FYI, I tried RFC 2616 and RFC 1180, but those are going over my head.
Any suggestion?
Thanks in advance.
You can't know the total transfer size in advance, even ignoring retransmits. There are several things that will stop you:
TCP options are negotiated between the hosts when the connection is established. Some options (e.g., timestamp) add additional data to the TCP header
"total data transfer size" is not clear. Ethernet, for example, adds quite a few more bits on top of whatever IP used. 802.11 (wireless) will add even more. So do HDLC or PPP going over a T1. Don't even think about frame relay. Some links may use compression (which will reduce the total size). The total size depends on where you measure it, even for a single packet.
Assuming you're just interested in the total octet size at layer 2, and you know in advance which TCP options will be negotiated, you still can't know the path MTU, which may change even while the connection is in progress. Or, if you're not doing path MTU discovery (which would be weird), a packet may get fragmented somewhere, and the remote end will see a different amount of data transferred than you do.
I'm not sure why you need to know this, but I suggest that:
If you just want an estimate, watch a typical connection in Wireshark. Calculate the percent overhead (vs. the size of data you gave to TCP, and received from TCP). Use that number to estimate: it will be close enough, except in pathological situations.
If you need to know for sure how much data your end saw transmitted and received, use libpcap to capture the packet stream and check.
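Here's a sketch of that second approach with libpcap (the interface name "eth0" and the 1000-packet cap are placeholders):

    #include <pcap/pcap.h>
    #include <stdio.h>

    /* add up the on-the-wire length of each captured packet */
    static void count_bytes(u_char *user, const struct pcap_pkthdr *h,
                            const u_char *bytes)
    {
        (void)bytes;
        *(unsigned long long *)user += h->len;
    }

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        unsigned long long total = 0;

        pcap_t *p = pcap_open_live("eth0", 65535, 0, 1000, errbuf);
        if (!p) { fprintf(stderr, "%s\n", errbuf); return 1; }

        pcap_loop(p, 1000, count_bytes, (u_char *)&total);
        printf("captured %llu bytes at layer 2\n", total);
        pcap_close(p);
        return 0;
    }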
I'd say on average that the request and response have about 8 lines of headers each, at about 30 characters per line. Then allow for the size increase of converting any uploaded binary to Base64 (roughly a 4/3 expansion).
You didn't say whether you also want to count TCP/IP packet headers, in which case you could assume an MTU of about 1500 and add roughly 40 bytes (20 for the TCP header plus 20 for the IPv4 header, without options) per ~1460 data bytes.
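As a rough worked example (my own numbers, purely illustrative): POSTing a 100 KB body splits into about 71 segments of up to 1460 bytes each; at roughly 40 bytes of TCP + IPv4 header per segment that is about 2.8 KB of overhead, i.e. under 3%, before counting the HTTP headers themselves or any Base64 expansion (about +33%).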
Finally, you could always set up a packet sniffer and count the actual bytes for a sample of data.
Oh, and you may need to allow for deflate/gzip encoding as well.