Problem
I need to modify a .pcap file captured over a timespan of 5 minutes such that it simulates a .pcap file captured over a timespan of 20 minutes. The problem is that I don't know how to do this.
Example
To illustrate the problem, suppose I have a .pcap file with four captured packets p1-p4 and a start time t, such that:
p1 is sent at t + 0 minutes
p2 is sent at t + 1 minutes
p3 is sent at t + 2 minutes
p4 is sent at t + 3 minutes
I want my resulting .pcap file to contain the same four packets but with the timestamps scaled (from 5 minutes to 20 minutes) such that they represent the following:
p1 is sent at t + 0 minutes
p2 is sent at t + 4 minutes
p3 is sent at t + 8 minutes
p4 is sent at t + 12 minutes
Tried solution(s)
editcap; however, the only option I could find there is -t, which shifts all timestamps by a fixed amount rather than scaling them.
Background
I am using tcpreplay to replay a scenario in which a user browses a webpage. I simultaneously inject some packets which depend on the .pcap file I replay, i.e. the packets are injected by live monitoring of the replayed traffic and subsequently adjusting the packets that are sent out. This entire traffic trace - i.e. both the replayed traffic and the injected packets - is captured using tcpdump. As there are a lot of large .pcap files I want to replay, I use the tcpreplay --multiplier option to speed up the process. However, this means the final capture is a compressed version of the original .pcap file. I would like to 'stretch' the newly created .pcap so that it spans the same duration as the original.
This can be accomplished with Wireshark using its "Time Shift" feature.
Assuming the timestamp for packet 1 is 2017-08-17 12:00:00.000000, select packet 1 then choose "Edit -> Time Shift..." and set the time for packet 1 to 2017-08-17 12:00:00.000000 (i.e., don't change this one). Click the box next to "...then set packet" and enter 2 for the packet number and 2017-08-17 12:04:00.000000 as the timestamp. You'll notice that it also indicates, "and extrapolate the time for all other packets", which is what you want. Hit Apply.
At this point, the timestamps should be adjusted to what you want, although the sub-second component might not end up being exactly the same for all packets and for some reason even packet 1's sub-second component is not exactly what was originally specified. If you really want to retain the original sub-second component, then you'll have to adjust one packet at a time. Considering that there are only 4 packets to adjust, this should be feasible. I might suggest filing a Wireshark bug report for the erroneous sub-second adjustment though.
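If you would rather script this than click through the GUI, here is a minimal sketch using Python and scapy; the file names and the ×4 factor are placeholders based on the example above, and it assumes the whole capture fits in memory.

# Rescale packet timestamps so the capture spans 4x its original duration.
# Minimal sketch: file names and the factor are placeholders.
from scapy.all import rdpcap, wrpcap

FACTOR = 4  # e.g. a 5-minute capture stretched to 20 minutes

packets = rdpcap("capture_5min.pcap")   # loads the whole file into memory
t0 = packets[0].time                    # timestamp of the first packet

for pkt in packets:
    pkt.time = t0 + (pkt.time - t0) * FACTOR  # keep t0, scale each offset

wrpcap("capture_20min.pcap", packets)

The first packet keeps its original timestamp and every later packet's offset from it is multiplied by the factor, which is essentially the "extrapolate" behaviour of the Time Shift dialog.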
Related
I have an IoT device, which communicates with a cloud server via UDP. The device receives a command to turn on/off every couple of seconds based on a cloud schedule.
I believe the chip inside the device is similar to an Arduino Pro Mini. It has an external serial-to-WiFi bridge which "opens" the UDP connection to the server.
Commands from server:
CMD22246A00M10C239S004!9S1$
CMD22246A00M10C239S280!WM0$
CMD22246A00M10C075S960!X2I$
CMD22246A00M10C239S520!ME5$
CMD22246A00M10C075S811!EPJ$
I will explain the data a bit in case that helps.
Time in these packets is 22:46
The first 2 stands for Wednesday (0 being Monday)
A00 basically means turn off (supply 0 amps) - This changes to A10 when it is allowed to turn on
M10 is the maximum configured amps the devices should be allowed to pass through
CXXX I have no idea about
SXXX I have no idea about
And the 3 alphanumerics between the ! and $ seem to be a checksum. The letters are always uppercase.
The device reports data back to the cloud in a similar way, with a 3-character alphanumeric checksum at the end.
I have tried "injecting" command data into the device via a separate UDP server but they are all have no effect unless I replay valid ones from the server.
I have tried various online tools and checksum/crc calculators but cannot seem to find any matches.
Thanks in advance.
Update
I have just started to notice that similar packets have very similar "checksums" at the end. Here is a link to all my data from every Wednesday at 23:46, sorted alphabetically, which gives the best string matches when reading from the left. I have also noticed that when the data is "+1" relative to its neighbour, the first character of the checksum may be shifted by 1 as well.
Full data set here: https://pastebin.com/n6LgrDfh
Same data but split with symbols removed: https://pastebin.com/Q8q4ANEE
I have split these examples and removed the symbols for easier reading:
CMD22346A10M10 C075 S274 FZD
CMD22346A10M10 C075 S275 EZD
CMD22346A10M10 C075 S276 DZD
CMD22346A10M10 C075 S277 CZD
CMD22346A10M10 C073 S515 P60
CMD22346A10M10 C073 S516 Q60
CMD22346A10M10 C073 S517 J60
Update 2
The letter O never appears in the check characters.
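Not an answer, but a sketch of the kind of automated search that might help narrow this down: feed the split samples above into a script that tries a few simple candidate check functions and reports whether any of them reproduce the observed check characters. The candidate functions and the character mapping below are pure guesses, not the device's actual algorithm.

# Try a few guessed check functions against captured (payload, check) pairs.
# Everything here is a guess: the real algorithm may well be none of these.

CHARSET = "ABCDEFGHIJKLMNPQRSTUVWXYZ0123456789"  # no letter O, matching the observation above

def xor_bytes(data: bytes) -> int:
    value = 0
    for b in data:
        value ^= b
    return value

def sum_bytes(data: bytes) -> int:
    return sum(data)

def encode(value: int) -> str:
    """Guessed mapping: take 5 bits of the value per check character."""
    return "".join(CHARSET[(value >> (5 * i)) % len(CHARSET)] for i in range(3))

CANDIDATES = {"xor": xor_bytes, "sum": sum_bytes}

# (payload without the trailing check characters, observed check characters)
SAMPLES = [
    ("CMD22346A10M10C075S274", "FZD"),
    ("CMD22346A10M10C075S275", "EZD"),
    ("CMD22346A10M10C073S515", "P60"),
]

for name, fn in CANDIDATES.items():
    ok = all(encode(fn(payload.encode())) == check for payload, check in SAMPLES)
    print(f"{name}: {'matches all samples' if ok else 'does not match'}")

More candidates (CRC variants, sums over selected fields, different character mappings) can be dropped into CANDIDATES and tested against the full pastebin data the same way.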
I am looking for a solution because the sth-channel is full.
I am having trouble calculating the appropriate channel capacity.
This document has the following description.
In order to calculate the appropriate capacity, just have in consideration the following parameters:
・The amount of events to be put into the channel by the sources per unit time (let's say 1 minute).
・The amount of events to be gotten from the channel by the sinks per unit time.
・An estimation of the amount of events that could not be processed per unit time, and thus to be reinjected into the channel (see next section).
How can I check the values of these parameters?
You can't just check these parameters. They depend on your application.
What they are saying is that you should choose a capacity large enough that the generator doesn't get stuck. This may not be possible in your application.
Say your generator produces one event per second and it takes 2 seconds for a receiver to handle that event. Now let's assume you have 3 receivers. In 1 second, each receiver can process 0.5 events, so your 3 receivers together can process 0.5 × 3 = 1.5 events per second, which is more than what you get as input. Your capacity can be 1 or 2; using 2 will greatly increase your chances that you do not get blocked.
Let's review another example:
Your generator wants to push 1,000 events per second
Your receivers take 3 seconds to process one event
You would need 1,000 x 3 = 3,000 receivers (3,000 goroutines that can run at full speed in parallel...)
In this example, the total number of receivers is so large that you have to either break up your code to run on multiple computers or optimize your receiver code so it can process the data in a reasonable amount of time. Say you have 50 processors: your receivers will get 1,000 events per second, and if all 50 run at full speed, each receiver needs to finish its work in:
50 / 1000 = 0.05 seconds
Now let's assume that in most cases your goroutines take 0.02 seconds, but once in a while one will take 1 second. That means your goroutines can get a little behind, so your capacity (so that the generator doesn't get blocked) should be a little over 1,000. Again, it will depend on how many of the routines get slowed down, etc. In this last example, each event usually takes only 0.02 seconds to process, so if you can spread those 1,000 events over the 1-second period you may not even need the 50 goroutines and could get away with a smaller capacity. On the other hand, if you have big bursts where you may end up receiving many events (say 500) all at once, then more goroutines and a larger capacity are important to avoid getting blocked.
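To make the arithmetic concrete, here is the same back-of-the-envelope calculation as a small Python script; the rates are the illustrative numbers from the examples above, not measurements from your system.

import math

def required_receivers(arrival_rate_per_s, service_time_s):
    """Minimum number of parallel receivers needed to keep up with arrivals."""
    return math.ceil(arrival_rate_per_s * service_time_s)

# Example 1: 1 event/s arriving, 2 s to process each event
print(required_receivers(1, 2))      # -> 2 (3 receivers give the headroom described above)

# Example 2: 1,000 events/s arriving, 3 s to process each event
print(required_receivers(1000, 3))   # -> 3000 receivers

# Example 3: only 50 receivers available for 1,000 events/s,
# so each receiver must finish an event within:
print(50 / 1000)                     # -> 0.05 seconds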
I downloaded the inside tcpdump data for Week 5, Monday of the DARPA dataset (link)
and the attack list for weeks 4 and 5 from the DARPA site (link).
(The attack list says that at 04/06/1999 08:11:15, with a duration of 00:00:10, there is a tcpreset attack
on destination IP 172.016.112.050.)
I want to find the tcpreset attack packet in the tcpdump file, so I open the tcpdump with Wireshark and filter for packets with times between 8:11:15 and 8:11:25 (frame.time >= "Apr 6, 1999 08:11:15" and frame.time <= "Apr 6, 1999 08:11:25").
Problem: I can't find any packet with destination IP 172.016.112.050 in the result!
Try allowing a one-minute gap.
During the 1999 evaluations, a one-minute gap was allowed to give the IDSs a chance to detect the labeled attacks during week 2.
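If it helps, here is a rough sketch of widening the window and filtering by destination with Python and scapy; the file name is a placeholder, the timestamps are interpreted in the local timezone (adjust for the trace's timezone), and the zero-padded address from the attack list corresponds to 172.16.112.50.

# Pull packets in a time window for a given destination, widening the window by a minute.
from datetime import datetime, timedelta
from scapy.all import rdpcap
from scapy.layers.inet import IP

packets = rdpcap("inside.tcpdump")   # placeholder file name

start = datetime(1999, 4, 6, 8, 11, 15) - timedelta(minutes=1)
end   = datetime(1999, 4, 6, 8, 11, 25) + timedelta(minutes=1)

for p in packets:
    ts = datetime.fromtimestamp(float(p.time))   # local time; adjust for the trace's timezone
    if start <= ts <= end and IP in p and p[IP].dst == "172.16.112.50":
        p.show()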
I've read advice in many places to the effect that sending a lot of small packets will lead to network congestion. I've even experienced this with a recent multi-threaded tcp app I wrote. However, I don't know if I understand the exact mechanism by which this occurs.
My initial guess is that if the MTU of the physical transmission medium is fixed, and you send a bunch of small packets, then each packet may potentially take up an entire transmission frame on the physical medium.
For example, my understanding is that even though Ethernet supports variable frames, most equipment uses a fixed Ethernet frame of 1500 bytes. At 100 Mbit/s, a 1500-byte frame "goes by" on the wire every 0.12 milliseconds. If I transmit a 1-byte message (plus TCP & IP headers) every 0.12 milliseconds, I will effectively saturate the 100 Mbit Ethernet connection while carrying only 8333 bytes of user data per second.
Is this a correct understanding of how tinygrams cause network congestion?
Do I have all my terminology correct?
In wired Ethernet at least, there is no "synchronous clock" that times the beginning of every frame. There is a minimum frame size, but it's more like 64 bytes, not 1500. There are also minimum gaps between frames, but that might only apply to shared-access networks (ATM and modern Ethernet are switched, not shared-access). It is the maximum size that is limited to 1500 bytes on virtually all Ethernet equipment.
But the smaller your packets get, the higher the ratio of framing headers to data. Eventually you are spending 40-50 bytes of overhead for a single byte. And more for its acknowledgement.
If you could just hold for a moment and collect another byte to send in that packet, you have doubled your network efficiency. (this is the reason for Nagle's Algorithm)
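As an aside, this hold-and-coalesce behaviour is what the TCP_NODELAY socket option controls; a minimal Python illustration of turning Nagle's algorithm off for a socket (the address is a placeholder):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # 1 disables Nagle's algorithm
s.connect(("example.com", 80))                           # placeholder address
s.sendall(b"x")  # with TCP_NODELAY set, even a 1-byte write goes out immediately
s.close()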
There is a tradeoff on a channel with errors: the longer the frame you send, the more likely it is to experience an error and have to be retransmitted. Newer wireless standards load up the frame with forward error correction bits to avoid retransmissions.
The classic example of "tinygrams" is 10,000 users all sitting on a campus network, typing into their terminal sessions. Every keystroke produces a single packet (and an acknowledgement). At a typing rate of 4 keystrokes per second, that's 80,000 packets per second just to move 40 kbytes per second. On a "classic" 10 Mbit shared-medium Ethernet, this is impossible to achieve, because you can only send about 15k minimum-sized packets in one second - excluding the effect of collisions:
96 bits inter-frame gap
+ 64 bits preamble
+ 112 bits ethernet header
+ 32 bits trailer
-----------------------------
= 304 bits overhead per ethernet frame.
+ 8 bits of data, padded to the 368-bit (46-byte) minimum payload (this doesn't even include IP or TCP headers!!!)
----------------------------
= 672 bits per tinygram on the wire
10,000,000 bits/s ÷ 672 bits/packet ≈ 14,880 packets/second.
Perhaps a better way to state this is that an Ethernet that is maxed out moving tinygrams can only move about 119 kbit/s of user data across a 10 Mbit/s medium, for an efficiency of roughly 1.2%.
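The same arithmetic as a quick script, using the framing figures from the breakdown above:

# Reproduce the tinygram arithmetic above for classic 10 Mbit/s Ethernet.
LINK_BPS = 10_000_000

framing_bits = 96 + 64 + 112 + 32   # inter-frame gap + preamble + header + FCS = 304
payload_bits = 46 * 8               # 1 data byte padded to the 46-byte minimum payload
frame_bits = framing_bits + payload_bits   # 672 bits on the wire per tinygram
data_bits = 8                       # the single useful byte

packets_per_s = LINK_BPS / frame_bits
print(round(packets_per_s))                    # ~14,881 packets/s
print(round(packets_per_s * data_bits))        # ~119,000 bits/s of user data
print(round(100 * data_bits / frame_bits, 2))  # ~1.19 % efficiency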
A TCP packet transmitted over a link will have something like 40 bytes of header information. Therefore, if you break a transmission into 100 1-byte packets, each packet sent carries those 40 bytes of headers, so about 98% of the resources used for transmission are overhead. If instead you send it as one 100-byte packet, the total transmitted data is only 140 bytes, so only roughly 28% is overhead. In both cases you've transmitted 100 bytes of payload over the network, but in one case you used 140 bytes of network resources to accomplish it, and in the other you used 4,100 bytes. In addition, it takes more resources on the intermediate routers to correctly route 100 41-byte packets than one 140-byte packet. Routing 1-byte packets is pretty much the worst-case scenario for router performance, so they will generally exhibit their worst-case performance in this situation.
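For instance, the overhead ratio in that comparison works out like this (assuming a combined IP + TCP header of roughly 40 bytes):

# Header overhead for 100 bytes of payload: many tiny packets vs. one packet.
HEADER = 40  # rough combined IP + TCP header size in bytes

tiny = 100 * (HEADER + 1)   # 100 packets carrying 1 payload byte each
single = HEADER + 100       # one packet carrying all 100 bytes

print(tiny, round(100 * (tiny - 100) / tiny, 1))    # 4100 bytes on the wire, ~97.6 % overhead
print(single, round(100 * HEADER / single, 1))      # 140 bytes on the wire, ~28.6 % overhead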
In addition, especially with TCP, as performance degrades due to small packets, the machines can try to do things to compensate (like retransmit) that will actually make things worse, hence the use of Nagle's algorithm to try to avoid this.
BDK has about half the answer (+1 for him). A large part of the problem is that every message comes with 40 bytes of overhead. It's actually a little worse than that, though.
Another issue is that Ethernet actually has a minimum frame size. (This is not the MTU. The MTU is a maximum before fragmentation starts. Different issue entirely.) The minimum is pretty small (a 46-byte payload, which the IP and TCP headers already nearly fill), but if you don't send that much, the frame is padded out to that size anyway.
Another issue is protocol overhead. Each packet sent by TCP causes an ACK packet to be sent back by the recipient as part of the protocol.
The result is that if you do something silly, like send one TCP packet every time the user hits a key, you could easily end up with a tremendous amount of wasted overhead data floating around.
I need to calculate the total data transferred when sending a fixed amount of data from client to server over TCP/IP. This includes connecting to the server, sending the request and headers, receiving the response, receiving the data, etc.
More precisely, how do I get the total data transferred when using the POST and GET methods?
Is there any formula for that? Even a theoretical one will do fine (not considering packet loss or connection retries etc)
FYI I tried RFC2616 and RFC1180. But those are going over my head.
Any suggestion?
Thanks in advance.
You can't know the total transfer size in advance, even ignoring retransmits. There are several things that will stop you:
TCP options are negotiated between the hosts when the connection is established. Some options (e.g., timestamp) add additional data to the TCP header
"total data transfer size" is not clear. Ethernet, for example, adds quite a few more bits on top of whatever IP used. 802.11 (wireless) will add even more. So do HDLC or PPP going over a T1. Don't even think about frame relay. Some links may use compression (which will reduce the total size). The total size depends on where you measure it, even for a single packet.
Assuming you're just interested in the total octet size at layer 2, and you know the TCP options that will be negotiated in advance, you still can't know the path MTU, which may change even while the connection is in progress. And if you're not doing path MTU discovery (which would be weird), the packet may get fragmented somewhere, and the remote end will see a different amount of data transferred than you do.
I'm not sure why you need to know this, but I suggest that:
If you just want an estimate, watch a typical connection in Wireshark. Calculate the percent overhead (vs. the size of data you gave to TCP, and received from TCP). Use that number to estimate: it will be close enough, except in pathological situations.
If you need to know for sure how much data your end saw transmitted and received, use libpcap to capture the packet stream and check.
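A rough sketch of that capture-and-count approach, using Python and scapy (the file name is a placeholder; this counts bytes as they appear in the capture, so Ethernet FCS and padding are not accounted for):

# Estimate protocol overhead from a capture: bytes on the wire vs. TCP payload bytes.
from scapy.all import rdpcap
from scapy.layers.inet import TCP

packets = rdpcap("transfer.pcap")   # placeholder file name

wire_bytes = sum(len(p) for p in packets)
payload_bytes = sum(len(p[TCP].payload) for p in packets if TCP in p)

overhead = wire_bytes - payload_bytes
print(f"wire: {wire_bytes}  payload: {payload_bytes}  "
      f"overhead: {100 * overhead / wire_bytes:.1f} %")

The resulting overhead percentage is the kind of estimate suggested above for extrapolating to other transfer sizes.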
I'd say on average that the request and response have about 8 lines of headers each and about 30 characters per line. Then allow for the size increase of converting any uploaded binary to Base64.
You didn't say if you also want to count the TCP/IP packet headers, in which case you could assume an MTU of about 1500 and add roughly 40 bytes (IP + TCP headers) per 1500 data bytes.
Finally, you could always setup a packet sniffer and count actual bytes for a sample of data.
Oh, and you may need to allow for deflate/gzip encoding as well.