I want to store a large number of packets (say around 200,000) in a pcap file and then send it using tcpreplay. The problem is that the loop option in tcpreplay sends at a very low speed.
Right now I am capturing packets using Wireshark, but Wireshark stops responding after a lot of packets have been sent. How can I increase the length of the pcap file by multiplying the number of packets already stored in it? And how can I achieve good throughput with tcpreplay?
If you'd like to multiply a single pcap, consider the mergecap command, shipped with Wireshark.
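For example, a quick sketch of doubling and re-doubling a capture by concatenating it with itself (file names are illustrative; -a appends the files rather than merging them chronologically by timestamp):

mergecap -a -w doubled.pcap original.pcap original.pcap
mergecap -a -w quadrupled.pcap doubled.pcap doubled.pcap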
Regarding the packet pumping speed of tcpreplay take a look at its FAQs, and in particular consider the -T option to pick a timer mechanism that works well. I've found rdtsc to work very well. Also consider using a short trace that fits into memory, and iterating playback of that, to avoid disk I/O. For this, consider the -K option.
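For example (a sketch; the interface name and loop count are illustrative), preloading the pcap into RAM and looping it as fast as possible:

tcpreplay -i eth0 -K --loop 1000 --topspeed capture.pcap
# or target a specific rate instead of top speed:
tcpreplay -i eth0 -K --loop 1000 --mbps 500 capture.pcap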
Context:
I am trying to replay a BLF file using python-can over a Vector interface, with a MessageSync iterator object and can.send() operating on the yielded messages. It functions as expected for 20-30 seconds, but after that it keeps raising an ERR_QUEUE_FULL exception while sending CAN messages. I have tried to handle that using can_bus.flush_tx_buffer() and can_bus.reset(), but to no effect. I understand that the transmit buffer fills up when messages are written too fast in a given segment, causing a buffer overflow.
Usage:
from can import LogReader, MessageSync

replayReaderObj = LogReader(replay_file_path)
msgSyncObj = MessageSync(messages=replayReaderObj, timestamps=True)
I am iterating over msgSyncObj with a for loop and calling send() on each message (provided the message is not an error frame). With the default arguments of gap=0.0001 and skip=60, the replay timestamps are considerably delayed compared to the replay file. Hence in the next attempt I set gap=0 so that only the offset difference is considered. That aligns the replay timestamps, but causes a buffer overflow within a few seconds. A sketch of the loop is shown below.
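For reference, a minimal sketch of the replay loop described above, with a hypothetical back-off retry when the transmit queue is full (channel, app_name, and the 5 ms back-off are illustrative, not my actual configuration):

import time
import can
from can import LogReader, MessageSync

bus = can.interface.Bus(bustype="vector", channel=0, app_name="CANalyzer")
for msg in MessageSync(messages=LogReader(replay_file_path), timestamps=True, gap=0):
    if msg.is_error_frame:
        continue
    while True:
        try:
            bus.send(msg)
            break
        except can.CanError:
            time.sleep(0.005)  # back off briefly when the transmit queue is full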
The same replay file, run through a Vector CANoe replay block, runs just fine with no buffer issues, in roughly the expected replay duration (+10%).
Question:
Can anyone shed light on whether python-can and Vector CANoe (both running on a Win10 PC) configure the transmit queue buffer differently? Any suggestion on how I can increase the transmit queue buffer used by python-can would be highly appreciated, along with ways of handling such buffer overflows (since flush_tx_buffer isn't having any impact).
Note: In the Vector Hardware Configuration, the transmit queue size is configured as 256 messages. I am not sure whether python-can uses that same configuration; I'd like to confirm before changing it.
Additional context
OS and version: Win 10
Python version: Python 3
python-can version: 3.3.4
python-can interface/s (if applicable): Vector VN1630
There is another real ECU for acknowledgement of Tx messages. The replay runs fine if I keep a decent wait time (10 ms, the minimum that time.sleep() on Windows can provide) between consecutive messages. The drawback is that with the wait-time injection, the replay takes 6-7x the actual replay time.
Let me know if any further information is needed on top of this. Sorry, I will not be able to share the trace file as it is proprietary, but I can get back to you with details regarding its nature.
I'm using ffmpeg to read an h264 RTSP stream from a Cisco 3050 IP camera and reencode it to disk as h264 (there are reasons why I'm not just using -codec:copy).
The ffmpeg version is as follows:
ffmpeg version 3.2.6 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 6.3.0 (Alpine 6.3.0)
I've also tried with ffmpeg 2.8.14-0ubuntu0.16.04.1 and the latest ffmpeg built from source (I used this commit) and see the same behaviour as below.
The command I'm running is:
ffmpeg -rtsp_transport udp -i 'rtsp://<user>:<pw>@<ip>:554/StreamingSetting?version=1.0&action=getRTSPStream&ChannelID=1&ChannelName=Channel1' -r 10 -c:v h264 -crf 23 -x264-params keyint=60:min-keyint=60 -an -f ssegment -segment_time 60 -strftime 1 /output/%Y%m%d_%H%M%S.ts -abort_on empty_output
I get a variety of errors at a fairly steady rate of at least one per second. Here's a sample:
[rtsp @ 0x7f268c5e9220] max delay reached. need to consume packet
[rtsp @ 0x7f268c5e9220] RTP: missed 40 packets
[h264 @ 0x55b1e115d400] left block unavailable for requested intra mode
[h264 @ 0x55b1e115d400] error while decoding MB 0 12, bytestream 114567
[h264 @ 0x55b1e115d400] concealing 3889 DC, 3889 AC, 3889 MV errors in I frame
The most common one is 'error while decoding MB x x, bytestream x'. This corresponds to severe corruption in the video file when played back.
I see many references to that error message on Stack Overflow and elsewhere, but I've yet to find a satisfying explanation or workaround. It comes from this line, which appears to correspond to missing data at the end of the stream; 'left block unavailable' comes from here and also looks like missing data.
Others have suggested using -rtsp_transport tcp instead (1, 2, 3) which in my case just gives a slightly different mix of errors, and still video corruption:
[h264 @ 0x557923191b00] left block unavailable for requested intra4x4 mode -1
[h264 @ 0x557923191b00] error while decoding MB 0 28, bytestream 31068
[h264 @ 0x557923191b00] concealing 2609 DC, 2609 AC, 2609 MV errors in I frame
[rtsp @ 0x7f88e817b220] CSeq 5 expected, 0 received.
Using Wireshark I confirmed that in both UDP and TCP mode, all of the packets are making it from the camera to the PC (sequential RTP sequence numbers without any missing) which makes me think the data is being lost after it arrives at ffmpeg.
I also see similar behaviour when running the same command against a Panasonic WV-SFV110 camera, but with less frequent errors overall. Switching from UDP to TCP on the Panasonic camera reduces but does not completely eliminate the errors/corruption.
I also tried a similar command with VLC and got similar errors (cvlc rtsp://<user>:<pw>@<ip>/MediaInput/h264 :sout='#transcode{vcodec=h264}:std{access=file,mux=ts,dst="output.ts"}') -- presumably the code hasn't diverged much since libav forked from ffmpeg.
The camera is plugged directly into a PoE port on the PC, so network congestion can't be the problem. Given that the PC has enough CPU to keep up with encoding the live stream, it seems to me to be a problem with ffmpeg that it still drops data from the TCP stream.
Qualitatively, there are several factors which seem to make the problem worse:
Higher video resolution
Higher system load on the machine running ffmpeg (e.g. transcoding to a low res .avi file produces fewer errors than transcoding to h264 VBR; using -codec:copy eliminates all errors except a couple while ffmpeg is starting up)
Greater motion within the camera view
What does the error mean? And what can I do about it?
Looking at the initial error message:
[rtsp @ 0x7f268c5e9220] max delay reached. need to consume packet
[rtsp @ 0x7f268c5e9220] RTP: missed 40 packets
I guess that you are losing UDP packets. The rest of the H.264 error messages are caused by receiving an incomplete bitstream.
Now the key is to isolate the issue. Is your network dropping packets, or is your server too slow or overloaded to receive the UDP (RTP) stream?
First I'd check the UDP buffer size of your OS: https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Web_Platform/5/html/Administration_And_Configuration_Guide/jgroups-perf-udpbuffer.html
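On Linux, for example, a quick sketch (the 25 MB value is illustrative; tune it to your stream's bitrate):

# Inspect the current receive-buffer ceiling and default
sysctl net.core.rmem_max net.core.rmem_default
# Raise the ceiling so applications can request larger UDP receive buffers
sudo sysctl -w net.core.rmem_max=26214400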
If increasing the UDP buffer size doesn't help, use ffmpeg with -codec:copy to lower the CPU load. Do you still get errors?
Since you want to re-encode, consider using Intel Quick Sync (-vcodec h264_qsv) or some other hardware encoder to lower your CPU load.
The question is not so much whether the PC has enough CPU, but more about identifying the bottleneck in the processing pipeline. Your H.264 encoder (x264) may oversubscribe your CPU so that you get momentary peak loads that result in packet drops. Try limiting the number of threads for x264 and/or lowering the quality with a faster preset ('fast' or 'faster').
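A sketch of what that might look like with your command (the thread count and preset are illustrative starting points, not tuned values):

ffmpeg -rtsp_transport udp -i 'rtsp://<user>:<pw>@<ip>:554/...' -c:v libx264 -preset faster -threads 2 -crf 23 -an -f ssegment -segment_time 60 -strftime 1 /output/%Y%m%d_%H%M%S.ts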
It does sound like packet loss is an issue. Higher video resolution and greater motion both increase the bitrate of the encoded video stream which will increase your packet loss. Depending on which packet is lost, you will see varying errors in the decoding process as you indicated in your post.
The higher system load while running ffmpeg also indicates that your network card might be dropping packets, e.g. when ffmpeg takes too long to read them because it is busy transcoding the video.
First question is what is your network topology? Streaming over the public Internet is a lot harder than streaming over your LAN. What kind of switches/routers are in the network?
Next question: what bitrate is your camera streaming at? Try reducing this and check the results. Be systematic in your approach (a baseline command is sketched after this list), i.e.
Don't transcode at first.
Just receive the video.
Write it to file.
Check for packet loss/video artifacts.
Start at lower bitrates, e.g. 100 kbps, and increase this if no loss is evident.
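For the baseline step, something along these lines (no transcoding, just writing what arrives; the output name is illustrative):

ffmpeg -rtsp_transport udp -i 'rtsp://<user>:<pw>@<ip>:554/...' -c copy baseline.ts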
The next thing I would try is to increase the size of the receiver buffers. While I am not that familiar with ffmpeg, it looks like you can set it via recv_buffer_size as indicated here. You then need to work out a reasonably big size based on your camera configuration, enough to store e.g. a couple (5?) of seconds of video data. Check whether there are fewer artifacts, or longer periods without artifacts, as you increase the receiver buffer size.
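As a hedged sketch: I believe ffmpeg's rtsp demuxer also exposes a buffer_size input option for the underlying protocol, so you could try something like the following (the value assumes roughly 5 s of an 8 Mbit/s stream):

ffmpeg -buffer_size 5000000 -rtsp_transport udp -i 'rtsp://<user>:<pw>@<ip>:554/...' ...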
Of course if your processor is too slow to transcode the video in real time, you will run out of buffer space sooner or later, in which case you might have to transcode to a lower resolution/bitrate, use less intensive encoder settings, or run the transcoding on a faster machine.
Also, note that adjusting receiver buffer size will not compensate for packet loss occurring on the public Internet so the above will help assuming you're streaming on a local network that supports the bitrate of the camera. If you exceed the bandwidth of the network you can expect packet loss. In that case streaming over TCP could help somewhat (at least until the receiver buffer overruns eventually).
More things you can try if the above does not help or solve the problem completely:
Sniff the incoming traffic with Wireshark or tcpdump.
Have a look at the traces. Filter the trace using "RTSP".
You should be able to see the RTP traffic, where consecutive RTP packets have increasing sequence numbers (e.g. 20, 21, 22, 23, etc.). If you see missing sequence numbers, you've got packet loss; in that case try streaming over TCP and repeat the trace. Remember to increase the receiver buffer size when streaming over TCP as well. A capture invocation is sketched below.
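For example (the interface name and output file are illustrative):

# Capture everything to/from the camera for offline analysis in Wireshark
tcpdump -i eth0 -w rtsp_trace.pcap host <camera-ip>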
In summary you have a pipeline architecture and you need to determine where in the pipeline the loss is occurring:
camera -> network -> receiver buffer (OS) -> application (ffmpeg)
I'm using Wireshark to monitor network traffic to test new software installed on a router. The router itself lets other networks (4G, mobile devices through USB, etc.) connect to it and enhance the speed of that router.
What I'm trying to do is disconnect the connected devices and discover whether there is any packet loss while doing this. I know I can simply use the filter "tcp.analysis.lost_segment" to track down lost packets, but how can I isolate the specific device that causes the packet loss? Or even know whether a loss happened because of a disconnected device?
Also, what is the most stable method to test this with? Downloading a big file? Streaming a video?
All input is greatly appreciated
You can't detect lost packets solely with Wireshark or any other packet capture*.
Wireshark basically "records" what was seen on the line.
If a packet is lost, then by definition you will not see it on the line.
The * means I lied. Sort of. You can't detect them as such, but you can very strongly indicate them by taking simultaneous captures at/near both devices in the data exchange... then comparing the two captures.
COMPUTER1<-->CAPTURE-MACHINE<-->NETWORK<-->CAPTURE-MACHINE<-->COMPUTER2
If you see the data leaving COMPUTER1, but never see it in the capture at COMPUTER2, there's your loss. (You could then move the capture machines one device closer on the network until you find the exact box/line losing your packets... or just analyze the devices in the network for eg configs, errors, etc.)
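For example, a rough sketch of comparing the two captures with tshark (the IP ID field is one convenient per-packet fingerprint; file names are illustrative):

tshark -r near_computer1.pcap -T fields -e ip.id > side1.txt
tshark -r near_computer2.pcap -T fields -e ip.id > side2.txt
diff side1.txt side2.txt   # lines only in side1 are packets that never arrived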
Alternately if you know exactly when the packet was sent, you could not prove but INDICATE its absence with a capture covering a minute or two before and after the packet was sent which does NOT have that packet. Such an indicator may even stand on its own as sufficient to find the problem.
I have a ton of Wireshark traces containing varying amounts of iSCSI packets. I need to parse out the command being sent by the initiator (in bytes) and write it to a file for each packet. I was originally going to do this manually, as it is easily viewable inside the Wireshark application, but some of these traces are huge (1-2 GB), and it would take forever to do by hand.
I've been looking into the tshark and rawshark documentation, but I'm not sure either is able to get me what I need. A friend suggested using libpcap to parse the traces myself, but from what I can tell I'd need to find some way to identify the bytes I need to pull out of each packet. Ideally I'd like to use something that recognizes them for me (i.e. Wireshark's iSCSI dissector).
Can anyone point me in the right direction? I need some way to parse out these commands from each iSCSI packet without looking through the raw packet data and trying to identify which bytes I need. As a note, it's not always the last 16 bytes of the packet, so I can't just take the last 16 bytes.
If you export the packets to PDML/XML (File->Export...->File->Save As Type PDML) you will get a nice XML file with all the protocol fields. You may be able to use this for your requirements, or use it as an index to locate the raw bytes in each packet.
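tshark can also produce the PDML without the GUI, which scales better to 1-2 GB traces (a sketch; the display filter is illustrative, and field names follow Wireshark's iSCSI dissector):

# Dissect only iSCSI traffic and dump every decoded field as PDML
tshark -r trace.pcap -Y iscsi -T pdml > trace.pdml

Once you've spotted the field you need in the PDML, tshark's -T fields -e <field name> output may get you the values directly.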
I'm trying to write a C application that logs Ethernet frame arrival times to a database. I've been doing some analysis with Wireshark and can see that it displays the arrival time of each frame. I'm going to use libpcap, which reads the pcap file, and I gather Wireshark also uses the pcap format. My question is: how does Wireshark calculate the arrival time of frames?
From https://www.wireshark.org/docs/wsug_html_chunked/ChAdvTimestamps.html:
While capturing, Wireshark gets the time stamps from the libpcap (WinPcap) library, which in turn gets them from the operating system kernel.
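So in a C application the same timestamp is already available on every packet: libpcap hands it to you in the pcap_pkthdr it fills in. A minimal sketch reading from a capture file (the file name is illustrative):

#include <stdio.h>
#include <pcap/pcap.h>

/* Print each frame's arrival time; libpcap fills pcap_pkthdr.ts
 * with the timestamp the kernel recorded at capture time. */
static void handler(u_char *user, const struct pcap_pkthdr *h, const u_char *bytes)
{
    (void)user; (void)bytes;
    printf("%ld.%06ld  %u bytes\n", (long)h->ts.tv_sec, (long)h->ts.tv_usec, h->caplen);
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_open_offline("trace.pcap", errbuf);
    if (!p) { fprintf(stderr, "%s\n", errbuf); return 1; }
    pcap_loop(p, 0, handler, NULL);
    pcap_close(p);
    return 0;
}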