Can I remove a packet payload inside a .p4 program? - network-programming

I would like to know if it's possible to completely remove the payload from a packet inside a .p4 program, or at least replace it with random data. The reason behind this is that I'm cloning a packet and sending it to a different host (a monitor), and this host does not need the packet's payload.

It depends on what you are trying to do. If you would like to remove some particular header, it's enough to call
hdr.random_header.setInvalid()
If you call that in egress, it removes the header's fields from the emitted packet.
If your headers carry length fields, you can also use
truncate(new_size)
when you know the size of the packet without its payload. If you already know an easier option, please share it here.
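To make the truncate() route concrete, here is a minimal sketch assuming the v1model architecture on BMv2's simple_switch; the control name, the header/metadata type names, the 34-byte cutoff (Ethernet + IPv4 without options), and the instance_type constant are all assumptions, not something taken from the question.

    // Hypothetical v1model egress control; the header/metadata types come from your own program.
    control MyEgress(inout headers hdr,
                     inout metadata meta,
                     inout standard_metadata_t standard_metadata) {
        apply {
            // On simple_switch, instance_type 1 marks an ingress-to-egress clone;
            // check the constants for your own target.
            if (standard_metadata.instance_type == 1) {
                // Cloned copy headed to the monitor: keep Ethernet (14) + IPv4 (20)
                // bytes and drop everything after them, i.e. the payload.
                truncate((bit<32>)34);
            }
        }
    }

If the monitor parses IPv4/UDP, you may also need to rewrite their length fields so they match the truncated size.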

Related

How does error handling work in SCTP Sockets API Extensions?

I have been trying to implement a wrapper library for the Linux interface to SCTP sockets, and I am not sure how to integrate the asynchronous style of errors (where they are delivered via events). All example code I have seen, if it deals with the errors at all, simply prints out the information related to the error when it is received, but inserting error-handling code there seems like it would be ineffective, because by that point all of the context related to the original message which was sent has been lost and only a 32-bit integer sinfo_context remains. It also seems that there is no way to directly tell when a given message has been acknowledged successfully by the remote peer, which would make it impossible to implement an approach which listens for errors after sending a message, because the context information for successfully-delivered messages could never be freed.
Is there a way to handle the errors related to a given sending operation as part of the call to a send function, or is there a different way to approach error handling for SCTP which does not lose the context of the error?
One solution which I have considered is using the SCTP_SENDER_DRY notification to tell when packets have been sent, however this requires sending only one packet at a time. Another idea is to use the peer's receiver window size together with the sinfo_cumtsn field of sctp_sndrcvinfo to calculate how much data has been acknowledged as fully received using the cumulative TSN, however there are a couple of disadvantages to this: first, it requires bookkeeping overhead to calculate a number of bytes received by the peer based on the cumulative TSN (especially if the peer's window size may change); second, it requires waiting until all earlier packets were received before reporting success, which seems to defeat the purpose of SCTP's multistreaming; and third, it seems like it would not work for unordered packets.
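No answer was posted for this one, but as a sketch of the context-preserving approach the question is circling around: you can stash the application context under the 32-bit id you pass to sctp_sendmsg(), and look it up again when an SCTP_SEND_FAILED notification arrives carrying the same value in sinfo_context. This assumes the lksctp-tools API and that SCTP_SEND_FAILED events have been enabled via the SCTP_EVENTS socket option; the pending[] table and its size are illustrative, not part of any standard interface.

    /* Sketch only: tie asynchronous SCTP send failures back to the message
     * that caused them via sinfo_context. */
    #include <netinet/sctp.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <sys/socket.h>

    #define MAX_PENDING 1024

    static void *pending[MAX_PENDING];   /* app context, indexed by a 32-bit id */

    int send_with_context(int sd, const void *buf, size_t len,
                          struct sockaddr *to, socklen_t tolen,
                          uint16_t stream, uint32_t ctx_id, void *app_ctx)
    {
        pending[ctx_id % MAX_PENDING] = app_ctx;  /* remember the context */
        /* the id travels with the message and comes back in sinfo_context */
        return sctp_sendmsg(sd, buf, len, to, tolen,
                            0 /* ppid */, 0 /* flags */, stream,
                            0 /* ttl */, ctx_id);
    }

    static void handle_notification(const union sctp_notification *snp)
    {
        if (snp->sn_header.sn_type == SCTP_SEND_FAILED) {
            const struct sctp_send_failed *ssf = &snp->sn_send_failed;
            void *app_ctx = pending[ssf->ssf_info.sinfo_context % MAX_PENDING];
            /* app_ctx is the context of the failed message: retry, report,
             * or free it here */
            (void)app_ctx;
        }
    }

This does not solve the "when can I free the context for a successful send" problem the question raises; it only keeps the failure path from losing its context.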

Missing bytes on IdUDPServer.OnRead event in buffer array - Delphi XE3

I can't seem to find any information about this anywhere, but is the TIdUDPServer.OnRead event passing everything that comes in to the AData array or not?
According to Wireshark captures, I'm missing 42 bytes of data; while I should be getting 572 bytes on each read, the AData size is always 530, and it seems the same bytes are always missing.
The device that sends the data is broadcasting it, and I can get everything I need except for 2 bytes, which seem to be 2 of those that are missing.
Any hints on this one?
Edit:
I should mention that these are the very first 42 bytes; everything afterwards is received fine.
The OnUDPRead event passes everything the socket receives from the OS. UDP operates on messages; unlike TCP, a UDP read is an all-or-nothing operation: either a whole UDP message is read or an error occurs, there is no in-between.
If you are missing data, then either the OS is not providing it (such as if it belongs to the UDP and/or IP headers), or you are not reading the AData parameter correctly. If you think this is not the case, then you need to update your question to show your actual OnUDPRead handler code, an example Wireshark dump showing the data being captured from the network, and the data that is making it to your OnUDPRead handler.
Update: The OS does not provide access to the packet headers (unless you are using a RAW socket, which TIdUDPServer does not use, but that is a whole other topic of discussion). The AData parameter of the OnUDPRead event provides only the application data portion of a packet, as that is what the OS provides. You cannot access the packet headers. Not coincidentally, the 42 bytes you are missing are exactly the Ethernet (14), IPv4 (20), and UDP (8) headers that Wireshark captures off the wire but the socket API never delivers.
That being said, you can get the packet's source IP:Port, at least, via the ABinding.PeerIP and ABinding.PeerPort properties of the OnUDPRead event. However, there is no way to retrieve the other packet header values (nor should you ever need them in most situations), unless you sniff the network yourself, such as with a pcap library.
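As a plain C illustration of the same point (not Indy/Delphi, and the port number is arbitrary): a single recvfrom() on a UDP socket returns one whole datagram's application payload plus the peer address, and never the Ethernet/IP/UDP headers.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>

    int main(void)
    {
        int sd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in local = {0}, peer = {0};
        socklen_t peerlen = sizeof(peer);
        char buf[2048];

        local.sin_family = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);
        local.sin_port = htons(5000);               /* arbitrary example port */
        bind(sd, (struct sockaddr *)&local, sizeof(local));

        /* One call = one datagram; n is the payload size only (530 here,
         * even though Wireshark shows 572 bytes on the wire). */
        ssize_t n = recvfrom(sd, buf, sizeof(buf), 0,
                             (struct sockaddr *)&peer, &peerlen);

        /* The source address is available, like ABinding.PeerIP/PeerPort. */
        printf("got %zd payload bytes from %s:%u\n", n,
               inet_ntoa(peer.sin_addr), (unsigned)ntohs(peer.sin_port));
        return 0;
    }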

How is data divided into packets?

Hi, sorry if this is a stupid question (I just started learning network programming), but I've been looking all over Google for how files/data are divided into packets. I've read everywhere that files are somehow broken up into packets, have headers/footers applied as they go through the OSI model, and are sent over the wire, where the recipient basically does the reverse and removes the headers.
My question is how exactly are files/data broken up into packets and how are they reassembled at the other end?
How does whatever doing the reassembling know when the last packet of the data has arrived and etc?
Is it possible to reassemble packets captured from another machine? And if so how?
(Also, if it means anything, I'm mostly interested in how this works for TCP-type packets.)
I also have some packets captured from an application on my computer through Wireshark; they're labeled as TCP protocol. What I want to do is reassemble them back into the original data, but how can you tell which packets belong to which set of data?
Any pointers towards resources is much appreciated, thank you!
My question is how exactly are files/data broken up into packets
What's being sent over a network isn't necessarily a file. In the cases where it is a file, there are several different protocols that can send files, and the answer to the question depends on the protocol.
For FTP and HTTP, the entire contents of the file are probably being sent as a single data stream over TCP (preceded by headers in the case of HTTP, and just raw over the connection in the case of FTP).
For TCP, there's a "maximum segment size" negotiated by the client and server, based on factors such as the maximum packet size on the various networks between the server and client, and the file data is sent, sequentially, in chunks whose size is limited by the maximum packet size and the size of IP and TCP headers.
For remote file access protocols such as SMB, NFS, and AFP, what goes over the wire are "file read" and "file write" requests; the reply to a "file read" request includes some reply headers and, if the read is successful, the chunk of file data that the read request asked for, and a "file write" request includes some request headers and the chunk of file data being written. Those are not guaranteed to be an entire file, in order, but if the program reading or writing the file is reading or writing the entire file in sequential order, the entire file's data will be available. The packet sizes will depend on the size of the read reply/write request headers and on the read or write size being used; those packets might be broken into multiple TCP segments, based on the TCP "maximum segment size" and the size of the IP and TCP headers.
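As an illustration of that stream view (a sketch, not any particular FTP/HTTP implementation; error handling is omitted): the sending application just writes the file's bytes into the TCP socket, and the kernel's TCP layer cuts the stream into MSS-sized segments on its own.

    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    void send_file(int sock, FILE *fp)
    {
        char buf[8192];
        size_t n;

        while ((n = fread(buf, 1, sizeof(buf), fp)) > 0) {
            size_t off = 0;
            while (off < n) {                     /* send() may be partial */
                ssize_t sent = send(sock, buf + off, n - off, 0);
                if (sent <= 0)
                    return;                       /* error handling elided */
                off += (size_t)sent;
            }
        }
        shutdown(sock, SHUT_WR);  /* FTP-style end-of-data: close our side */
    }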
How does whatever doing the reassembling know when the last packet of the data has arrived and etc?
For FTP, the recipient of the data knows that there is no more data when the side of the TCP connection over which the data is being transmitted is closed.
For HTTP, the recipient of the data knows that there is no more data when the side of the TCP connection over which the data is being transmitted is closed or, if the connection is "persistent" (i.e., it remains open for more requests and replies), when the amount of data specified by the "Content-Length:" header, sent before the data, has been transmitted (or other similar mechanisms, such as the "last chunk" indication for chunked encoding).
For file access protocols, there's no real "we're at the end of data" indication; the closest approximation, for SMB, AFP, and NFSv4, is a "file close" operation.
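A receiver-side sketch of those two end-of-data signals (error handling omitted; the function names are my own): read until the peer closes for the FTP case, or until a known Content-Length has been consumed for the persistent-HTTP case.

    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* FTP-style: the transfer is over when recv() returns 0 (peer closed). */
    size_t read_until_close(int sock, FILE *out)
    {
        char buf[8192];
        size_t total = 0;
        ssize_t n;
        while ((n = recv(sock, buf, sizeof(buf), 0)) > 0) {
            fwrite(buf, 1, (size_t)n, out);
            total += (size_t)n;
        }
        return total;
    }

    /* HTTP-style: the transfer is over after content_length bytes, so the
     * connection can stay open for the next request. */
    size_t read_content_length(int sock, FILE *out, size_t content_length)
    {
        char buf[8192];
        size_t total = 0;
        while (total < content_length) {
            size_t want = content_length - total;
            if (want > sizeof(buf))
                want = sizeof(buf);
            ssize_t n = recv(sock, buf, want, 0);
            if (n <= 0)
                break;
            fwrite(buf, 1, (size_t)n, out);
            total += (size_t)n;
        }
        return total;
    }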
Is it possible to reassemble packets captured from another machine? And if so how?
It depends on the protocol, but, for HTTP and SMB, if the capture has been read into Wireshark (and all the file data is in the capture!), you can use the "Export Objects" menu, and, for some protocols, you might also be able to use tcpflow.
My question is how exactly are files/data broken up into packets and how are they reassembled at the other end?
They are basically just chopped up. Each packet (with header information added) can only hold a limited amount of actual data; on a typical Ethernet path, a TCP segment carries at most roughly 1,460 bytes of payload.
How does whatever doing the reassembling know when the last packet of the data has arrived and etc?
For a TCP transfer, the segments carry sequence numbers, so the receiving end knows how to put them back together in order. If a segment is lost, it gets retransmitted.
Is it possible to reassemble packets captured from another machine? And if so how?
I don't understand the question. How would you get these packets unless you were a man-in-the-middle?
These answers are true for TCP packets.
First, decide what payload size you want to transmit per packet.
Then add a header, the data, and a footer for each transmission.
Make sure the buffer length and data array divide evenly into that number of packets, so you don't end up with fractional packets (or handle the remainder explicitly).
A simple framing layout could be:
header: frame number, timestamp, packet number
payload: the data fragment
footer: your own application-specific information
Prepare the data fragments before sending; the sketch below shows one way to do it.
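A minimal C sketch of that framing scheme (the field names, chunk size, and the send_fn callback are all illustrative, not a standard format; no footer/checksum is shown):

    #include <stdint.h>
    #include <string.h>
    #include <time.h>

    #define CHUNK 1024                 /* payload bytes per fragment (assumed) */

    struct frag_header {
        uint32_t frame_no;             /* which logical message this belongs to */
        uint32_t packet_no;            /* fragment index within the message */
        uint32_t total_packets;        /* so the receiver knows when it has them all */
        uint64_t timestamp;            /* e.g. seconds since the epoch */
        uint32_t payload_len;          /* bytes of payload in this fragment */
    };

    /* Split `data` into fragments; send_fn is whatever actually transmits
     * (e.g. a sendto() wrapper). */
    void send_fragments(uint32_t frame_no, const uint8_t *data, size_t len,
                        void (*send_fn)(const void *buf, size_t n))
    {
        uint32_t total = (uint32_t)((len + CHUNK - 1) / CHUNK);
        for (uint32_t i = 0; i < total; i++) {
            uint8_t buf[sizeof(struct frag_header) + CHUNK];
            struct frag_header h = {
                .frame_no      = frame_no,
                .packet_no     = i,
                .total_packets = total,
                .timestamp     = (uint64_t)time(NULL),
                .payload_len   = (uint32_t)(i + 1 < total ? (size_t)CHUNK
                                                          : len - (size_t)i * CHUNK),
            };
            memcpy(buf, &h, sizeof(h));
            memcpy(buf + sizeof(h), data + (size_t)i * CHUNK, h.payload_len);
            send_fn(buf, sizeof(h) + h.payload_len);
        }
    }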

libtrace's function to calculate packet checksum

I am using libtrace to modify the payload of captured packets for research purposes. In this case, I have to calculate a new checksum for the modified packet. My question is: is there an easy way to do this, for example, a function in libtrace that can do it? Any comment is appreciated.
There's no API function in libtrace specifically for this at present, but there is code that generates correct IPv4, TCP and UDP checksums for packets inside of the tracereplay tool which you could use as the basis for writing your own functions to do it.
The code itself can be found in tools/tracereplay/tracereplay.c in the libtrace source. The libtrace source itself can be downloaded from here (in case you got libtrace via a packaging system).
There's also a mailing list for libtrace questions that is more likely to get prompt responses.
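For reference, the underlying algorithm is the standard RFC 1071 ones'-complement sum; here is a generic sketch (this is not libtrace's API, and for TCP/UDP you would also need to include the pseudo-header and zero the checksum field before summing):

    #include <stddef.h>
    #include <stdint.h>

    uint16_t rfc1071_checksum(const void *data, size_t len)
    {
        const uint8_t *p = data;
        uint32_t sum = 0;

        while (len > 1) {                 /* sum 16-bit big-endian words */
            sum += (uint32_t)p[0] << 8 | p[1];
            p += 2;
            len -= 2;
        }
        if (len == 1)                     /* pad a trailing odd byte with zero */
            sum += (uint32_t)p[0] << 8;

        while (sum >> 16)                 /* fold carries back into 16 bits */
            sum = (sum & 0xffff) + (sum >> 16);

        return (uint16_t)~sum;            /* ones' complement of the sum */
    }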

Strange rare out-of-order data received using Indy

We're having a bizarre problem with Indy10 where two large strings (a few hundred characters each) that we send out one after the other using TCP are appearing at the other end intertwined oddly. This happens extremely infrequently.
Each string is a complete XML message terminated with a LF and in general the READ process reads an entire XML message, returning when it sees the LF.
The call to actually send the message is protected by a critical section around the call to the IOHandler's writeln method and so it is not possible for two threads to send at the same time. (We're certain the critical section is implemented/working properly). This problem happens very rarely. The symptoms are odd...when we send string A followed by string B what we received at the other end (on the rare occasions where we have failure) is the trailing section of string A by itself (i.e., there's a LF at the end of it) followed by the leading section of string A and then the entire string B followed by a single LF. We've verified that the "timed out" property is not true after the partial read - we log that property after every read that returns content. Also, we know there are no embedded LF characters in the string, as we explicitly replace all non-alphanumeric characters in the string with spaces before appending the LF and sending it.
We have log mechanisms inside the critical sections on both the transmission and receiving ends and so we can see this behavior at the "wire".
We're completely baffled and wondering (although always the lowest possibility) whether there could be some low-level Indy issues that might cause this issue, e.g., buffers being sent in the wrong order....very hard to believe this could be the issue but we're grasping at straws.
Does anyone have any bright ideas?
You could try Wireshark to find out how the data is tranferred. This way you can find out whether the problem is in the server or in the client. Also remember to use TCP to get "guaranteed" valid data in right order.
Are you using TCP or UDP? If you are using UDP, it is possible (and expected) that the UDP packets can be received in a different order than they were transmitted due to the routing across the network. If this is the case, you'll need to add some sort of packet ID to each UDP packet so that the receiver can properly order the packets.
Do you have multiple threads reading from the same socket at the same time on the receiving end? Even just to query the Connected() status causes a read to occur. That could cause your multiple threads to read the inbound data and store it into the IOHandler.InputBuffer in random order if you are not careful.
Have you checked the Nagle settings of the IOHandler? We had a similar problem that we fixed by setting UseNagle to false. In our case sending and receiving large amounts of data in bursts was slow due to Nagle coalescing, so it's not quite the same as your situation.

Resources