Raspberry Pi MJPG-Streamer low latency - stream

I've built a Raspberry Pi robot. Now I want to stream video from the Raspberry Pi's onboard camera. I followed this tutorial:
http://blog.miguelgrinberg.com/post/how-to-build-and-run-mjpg-streamer-on-the-raspberry-pi/page/2
I finally got it working, but now I want the latency to be as low as possible. Low latency is important, because controlling a robot with a noticeable lag is impossible.
Any advice?
Have a nice day!

You should probably ask this on https://raspberrypi.stackexchange.com/
All the viable solutions I have found so far use raspivid. It encodes the video directly as H.264/MPEG, which is much more efficient than capturing every single frame.
The one that works best for me so far is:
- first, on your Raspberry Pi:
raspivid -t 999999 -w 1080 -h 720 -fps 25 -hf -b 2000000 -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=<IP-OF-PI> port=5000
- then, on your PC/viewing device:
gst-launch-1.0 -v tcpclientsrc host=<IP-OF-PI> port=5000 ! gdpdepay ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false
Source: http://pi.gbaman.info/?p=150

From experimentation, I think the camera board does most of the processing, relieving the Raspberry Pi of much of the load. You can see this by running top on the Pi as it captures and streams.
First I run the following on a linux client:
nc -l -p 5001 | mplayer -fps 31 -cache 512 -
Then I run the following on the raspi:
/opt/vc/bin/raspivid -t 999999 -w 1920 -h 1080 -o - | nc 192.168.1.__ 5001
This was done over an ethernet connection from raspi to linux desktop both connected to a common ethernet hub.
I have made the following observations:
- these settings give me pretty low lag (<100 ms)
- increasing the cache size (on the client) only leads to a larger lag, since the client buffers more of the stream before it starts playing
- decreasing the cache size below some lower limit (512 for me) leads to a player error: "Cannot seek backward in linear streams!"
- specifying dimensions smaller than the default 1920x1080 leads to longer delays, especially below 640x480
- specifying bitrates other than the default leads to longer delays (I'm not sure what the default bitrate is)
- for any of the scenarios that cause lag, the lag seems to decrease gradually over time, and most configurations I tried had practically no lag after a minute or so
It's unfortunate that very little technical information seems to be available on the board apart from what commands to run to make it operate. Any more input in the comments or edits to this answer would be appreciated.

I realise this is an old post, but I recently needed to do something similar, so I created a Node.js Raspberry Pi MJpeg Server where you can pass the compression quality and the timeout between frames (which determines the frame rate).
Start the server:
node raspberry-pi-mjpeg-server.js -p 8080 -w 1280 -l 1024 -q 65 -t 100
Options:
-p, --port port number (default 8080)
-w, --width image width (default 640)
-l, --height image height (default 480)
-q, --quality jpeg image quality from 0 to 100 (default 85)
-t, --timeout timeout in milliseconds between frames (default 500)
-h, --help display this help
-v, --version show version
I've open sourced it as I'm sure it will help others.

How to retrieve the flashed binary in ESP8266

I was able to flash a MicroPython binary which I'd cross-compiled some 6 months ago, and it was working fine. It was built from the master branch at that point in time, and I saved neither the code nor the binary.
Today, when I compiled again, the new binary has a problem at one point. So I want to revert to the old binary; the only problem is that I'm not sure which commit ID/build master was at ~6 months ago when my working binary was built.
I do have an ESP which has that binary flashed onto it. So I was wondering if there is a way to retrieve the binary from the ESP?
Please let me know if this can be done somehow, via ampy etc.
Or suggest a workaround. I'm already trying to find the approximate commit from around that time and would cross-compile again, though I'm not sure it would work as expected.
Regardless of which firmware you loaded onto your ESP8266 module (NodeMCU, MicroPython, Arduino, etc.) you can use esptool.py to dump the flash content to a file like so:
./esptool.py -p PORT -b 460800 read_flash 0 0x200000 flash_contents.bin
read_flash is the command; 0 is the start address and 0x200000 is the number of bytes to read (2 MB).
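If you later want to restore the dump to a module, esptool's write_flash command does the reverse. A minimal sketch, assuming a full 2 MB image that starts at flash address 0:
# Write the previously dumped image back, starting at flash address 0
./esptool.py -p PORT -b 460800 write_flash 0x0 flash_contents.bin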
For reading the firmware as a BIN file, you FIRST need to connect the FTDI adapter correctly to the pins on the IR module.
FTDI to IR Module as follows
FTDI 3.3 V to IR 3.3 V,
FTDI GND to IR GND,
FTDI GND to IR IO0 (flash mode - IMPORTANT otherwise it will not work),
FTDI RX to IR TXD,
FTDI TX to IR RXD
Then run the command below (here the COM port is 5 and the output file is named flash_contents.bin; replace them to match your COM port and the name you want). Important: the baud rate must be 9600.
esptool.py -p COM5 -b 9600 read_flash 0 0x200000 flash_contents.bin
Below is the outcome for me (running under Python 3.10.2 on Windows 11):
PS F:\> esptool.py -p COM5 -b 9600 read_flash 0 0x200000 flash_contents.bin
esptool.py v3.2
Serial port COM5
Connecting....
Detecting chip type... Unsupported detection protocol, switching and trying
again...
Connecting...
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 10:52:1c:f8:b7:c7
Stub is already running. No upload is necessary.
2097152 (100 %)
2097152 (100 %)
Read 2097152 bytes at 0x0 in 2215.2 seconds (7.6 kbit/s)...
Hard resetting via RTS pin...
PS F:\>
Remember that esptool.py -p COM5 -b 9600 read_flash 0 0x200000 flash_contents.bin reads 2 MB of flash memory,
but it also runs fine as esptool.py -p COM5 -b 9600 read_flash 0 0x100000 flash_contents.bin for 1 MB of flash, as was the case on my IR module.
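If you are not sure how much flash your module has, esptool can report it before you pick the read size. A quick check, assuming the same COM port as above (recent esptool versions print the detected flash size):
# Query the flash chip; the output includes the detected flash size
esptool.py -p COM5 flash_id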
I reduced the speed of reading the flash memory of my ESP8266: 460800 became 46080 (I dropped a zero), and it was successful.
My system is Windows 10.
C:\Users\POSITIVO\Downloads\esptool-master\esptool-master>esptool.py -p COM6 -b 46080 read_flash 0 0x400000 flash_contents3.bin
esptool.py v3.0-dev
Serial port COM6
Connecting....
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 2c:3a:e8:42:b9:f7
Uploading stub...
Running stub...
Stub running...
4194304 (100 %)
4194304 (100 %)
Read 4194304 bytes at 0x0 in 937.7 seconds (35.8 kbit/s)...
Hard resetting via RTS pin...

Gstreamer does not sink to named pipe

I'm getting different behavior when the sink of a gst-launch pipeline is a named pipe vs a normal file.
I have a gst-launch pipeline which displays video from a camera on an OMAP embedded (linux) board and delivers the video as avi via a tee.
gst-launch -v -e omx_camera device=0 do-timestamp=1 mode=0 name=cam cam.src ! "video/x-raw-yuv, format=(fourcc)NV12, width=240, height=320, framerate=30/1" ! tee name=t1 t1. ! queue ! ducatih264enc profile=100 level=50 rate-preset=low-delay bitrate=24000 ! h264parse ! queue ! avimux ! filesink location=/tmp/camerapipe t1. ! queue ! dri2videosink sync=false
If I make
filesink location=/some/real/file t1.
all is well
but I wish to read the output with a Java/OpenCV process, and when I do this, nothing reaches the Java process. The gst-launch process does announce that it has changed state to PLAYING.
To simplify things, instead of the Java process I tail -f the named pipe,
and I also don't see any output there, though in both cases the dri2videosink is displaying the video.
With either tail or the java process, killing it also stops the gst-launch process, so obviously it's 'connected' in some sense.
Killing the gst-launch process with the tail running gets what looks like a few K, maybe 1 frame of data, after gst-launch exits.
I've tried saving to normal file and reading with the java process, that works, so I know it's not the data format.
I am trying to do the same thing, though I am using OpenCV in C and working in Ubuntu.
I did get the following to work:
I created a named pipe in /dev/ called video_stream using mkfifo. Make sure you have permissions to read/write to/from it, or just use sudo.
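For reference, creating the pipe might look like this (the 666 mode is just an assumption so both the writer and reader can access it; adjust to your needs):
# Create the named pipe and make it readable/writable by the writer and reader processes
sudo mkfifo /dev/video_stream
sudo chmod 666 /dev/video_stream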
Play a test video to the named pipe:
sudo gst-launch -e videotestsrc ! video/x-raw-yuv, framerate=20/1, width=640, height=480 ! ffenc_mpeg4 ! filesink location=/dev/video_stream
Play from web cam to a named pipe:
sudo gst-launch -e v4l2src device=/dev/video0 ! ffenc_mpeg4 ! filesink location=/dev/video_stream
I then used the face detection tutorial at
http://docs.opencv.org/doc/tutorials/objdetect/cascade_classifier/cascade_classifier.html#cascade-classifier
to test everything, but changed my input from the webcam to the named pipe.
capture = cvCaptureFromCAM( -1 );
Becomes
VideoCapture capture("/dev/video_stream");
This works, but the problem with pipes and files is that closing the reader makes gstreamer stop. The solution is to use racic's ftee program:
sudo gst-launch -e videotestsrc ! video/x-raw-yuv, framerate=20/1, width=640, height=480 ! ffenc_mpeg4 ! fdsink fd=1 | ./ftee /dev/video_stream > /dev/null 2>&1
This sends ftee's stdin to the named pipe, with a copy to stdout (redirected to /dev/null), but ftee ignores errors and the closing of the destination pipe, so reading from the pipe and then stopping no longer affects gstreamer. Just try it and then think about what I wrote, not the opposite :)
Play from named pipe, anytime you want:
gst-launch filesrc location=/dev/video_stream ! decodebin ! autovideosink
Regarding your use with OpenCV:
VideoCapture capture("/dev/video_stream");
The video stream from /dev/video_stream should be MPEG-4, but I'm not sure OpenCV will detect the source properly. You might have to experiment with the backend (a gstreamer backend is also available when compiled into OpenCV).
See the API reference for creating a capture:
VideoCapture (const String &filename, int apiPreference)
and set apiPreference to the proper value. I'd try ffmpeg or gstreamer.
If you want to use gstreamer directly, try appsink as the sink; that is what OpenCV reads from. It might be something like
filesrc location=/dev/video_stream ! video/h264 ! appsink
The video/h264 caps are a blind guess, as I don't have the ffenc_mpeg4 encoder (it's from gst 0.10), but you get the idea.
Good luck.

ffmpeg udp/tcp stream receive frame not same as sent

I am streaming a video from a Raspberry Pi using this command:
ffmpeg -re -threads 2 -i sample_video.m2v -f mpegts - | \
ffmpeg -f mpegts -i - -c copy -f mpegts udp://192.168.1.100:12345
The remote PC at 192.168.1.100 uses the ffmpeg library to listen to the input stream. For example:
informat = ffmpeg::av_find_input_format("mpegts");
avformat_open_input(&pFormatCtx, "udp://192.168.1.100:12345", informat, options);
However, when I compute the hash value of each decoded frame on the two sides (i.e. the Raspberry Pi and the PC), they DON'T MATCH at all. A weird thing is, among ~2000 frames, there are in total ~10 frames whose hash values are the same on the sender and receiver side. The match result looks like this:
00000....00011000...00011110000...000
where 0 indicates a non-match and 1 indicates a match. The matched frames appear in short runs of 2~6 and only rarely, while most of the other frames have different hash values.
The hash is computed on the frame data buffer extracted using avpicture_layout(). On the Pi side, I just stream the video to a local port and there's a local process using the same code to decode and hash the frames:
ffmpeg -re -threads 2 -i sample_video.m2v -f mpegts - | \
ffmpeg -f mpegts -i - -c copy -f mpegts udp://localhost:12345
...
The streaming source, the Raspberry Pi, is connected directly to the PC with a cable. I don't think it is a packet loss problem. First, I reran the same process several times and the hash values of the received frames were the same each time (if packets were being lost, the results should differ, because packet loss is probabilistic). Secondly, I even tried streaming over tcp://192.168.1.100:12345 (with "tcp://192.168.1.100:12345?listen" on the PC), and the received frame hashes are still the same - and still different from the hash results on the Pi.
So, does anyone know why the streaming to a remote address will yield different decoded frames? Maybe I am missing some details.
Thanks in advance!!
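As a side note, a quick way to reproduce this kind of per-frame comparison without custom code is ffmpeg's framemd5 muxer, which writes one MD5 hash per decoded frame. A sketch, assuming the same source file and UDP address as above:
# On the Pi: hash every decoded frame of the local source file
ffmpeg -i sample_video.m2v -an -f framemd5 pi_frames.md5
# On the PC: hash every decoded frame of the received stream
ffmpeg -i udp://192.168.1.100:12345 -an -f framemd5 pc_frames.md5
# Compare the two lists
diff pi_frames.md5 pc_frames.md5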

ffmpeg - Continuously stream webcam to single .jpg file (overwrite)

I have installed ffmpeg and mjpeg-streamer. The latter reads a .jpg file from /tmp/stream and outputs it via http onto a website, so I can stream whatever is in that folder through a web browser.
I wrote a bash script that continuously captures a frame from the webcam and puts it in /tmp/stream:
while true
do
ffmpeg -f video4linux2 -i /dev/v4l/by-id/usb-Microsoft_Microsoft_LifeCam_VX-5000-video-index0 -vframes 1 /tmp/stream/pic.jpg
done
This works great, but is very slow (~1 fps). In the hopes of speeding it up, I want to use a single ffmpeg command which continuously updates the .jpg at, let's say 10 fps. What I tried was the following:
ffmpeg -f video4linux2 -r 10 -i /dev/v4l/by-id/usb-Microsoft_Microsoft_LifeCam_VX-5000-video-index0 /tmp/stream/pic.jpg
However this - understandably - results in the error message:
[image2 @ 0x1f6c0c0] Could not get frame filename number 2 from pattern '/tmp/stream/pic.jpg'
av_interleaved_write_frame(): Input/output error
...because the output pattern is bad for a continuous stream of images.
Is it possible to stream to just one jpg with ffmpeg?
Thanks...
You can use the -update option:
ffmpeg -y -f v4l2 -i /dev/video0 -update 1 -r 1 output.jpg
From the image2 file muxer documentation:
-update number
If number is nonzero, the filename will always be interpreted as just a
filename, not a pattern, and this file will be continuously overwritten
with new images.
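Combined with the roughly 10 fps the question asks for, a sketch might look like the following (the device path, rate, and quality value are assumptions to adapt):
# Grab from the webcam at ~10 fps and keep overwriting a single JPEG
ffmpeg -y -f v4l2 -i /dev/video0 -r 10 -qscale:v 4 -update 1 /tmp/stream/pic.jpg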
It is possible to achieve what I wanted by using:
./mjpg_streamer -i "input_uvc.so -r 1280x1024 -d /dev/video0 -y" -o "output_http.so -p 8080 -w ./www"
...from within the mjpg_streamer directory. It does all the nasty work for you, serving the stream to a web browser at the address:
http://{IP-OF-THE-SERVER}:8080/
It's also light-weight enough to run on a Raspberry Pi.
Here is a good tutorial for setting it up.
Thanks for the help!

Plot RTT histogram using wireshark or other tool

I have a small office network and I'm experiencing huge latency on the internet link. We have a simple network topology: a computer configured as a router running Ubuntu Server 10.10 with two network cards (one to the internet link, the other to the office network), and a switch connecting 20 computers. I have a huge tcpdump log collected at the router, and I would like to plot a histogram of the RTTs of all TCP streams to try to find the best solution to this latency problem. So, could somebody tell me how to do it using wireshark or another tool?
Wireshark or tshark can give you the TCP RTT for each received ACK packet using tcp.analysis.ack_rtt which measures the time delta between capturing a TCP packet and the ACK for that packet.
You need to be careful with this as most of your ACK packets will be from your office machines ACKing packets received from the internet, so you will be measuring the RTT between your router seeing the packet from the internet and seeing the ACK from your office machine.
To measure your internet RTT you need to look for ACKS from the internet (ACKing data sent from your network). Assuming your office machines have IP addresses like 192.168.1.x and you have logged all the data on the LAN port of your router you could use a display filter like so:
tcp.analysis.ack_rtt and ip.dst==192.168.1.255/24
To dump the RTTs into a .csv for analysis you could use a tshark command like so;
tshark -r router.pcap -Y "tcp.analysis.ack_rtt and ip.dst==192.168.1.255/24" -e tcp.analysis.ack_rtt -T fields -E separator=, -E quote=d > rtt.csv
The -r option tells tshark to read from your .pcap file
The -Y option specifies the display filter to use (-R without -2 is deprecated)
The -e option specifies the field to output
The -T options specify the output formatting
You can use the mergecap utility to merge all your pcap files into one file before running this command. Turning this output into a histogram should be easy!
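For instance, a rough text histogram of the exported RTTs can be produced with standard shell tools. A sketch, assuming the rtt.csv written by the command above and 1 ms buckets:
# Strip the quotes tshark added, bin the RTT values (seconds) into 1 ms buckets, and count per bucket
tr -d '"' < rtt.csv | awk '{ bucket = int($1 * 1000); count[bucket]++ } END { for (b in count) printf "%5d ms %8d\n", b, count[b] }' | sort -n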
Here's a 5-minute Perl script inspired by rupello's answer:
#!/usr/bin/perl
# For a live histogram of rtt latencies, save this file as /tmp/hist.pl and chmod +x /tmp/hist.pl, then run:
# tshark -i wlp2s0 -Y "tcp.analysis.ack_rtt and ip.dst==192.168.0.0/16" -e tcp.analysis.ack_rtt -T fields -E separator=, -E quote=d | /tmp/hist.pl
# Don't forget to update the interface "wlp2s0" and "and ip.dst==..." bits as appropriate, type "ip addr" to get those.
# RTT bucket thresholds (in seconds), from largest to smallest
$t[$m=0]=20;
$t[++$m]=10;
$t[++$m]=5;
$t[++$m]=2;
$t[++$m]=1;
$t[++$m]=0.9;
$t[++$m]=0.8;
$t[++$m]=0.7;
$t[++$m]=0.6;
$t[++$m]=0.5;
$t[++$m]=0.4;
$t[++$m]=0.3;
$t[++$m]=0.2;
$t[++$m]=0.1;
$t[++$m]=0.05;
$t[++$m]=0.04;
$t[++$m]=0.03;
$t[++$m]=0.02;
$t[++$m]=0.01;
$t[++$m]=0.005;
$t[++$m]=0.001;
$t[++$m]=0;
$h[0]=0;
while (<>) {
    # strip the quotes tshark adds, keep the RTT value, count total samples
    s/\"//g; $n=$_; chomp($n); $o++;
    # find the bucket with the smallest threshold that is still >= this RTT
    for ($i=$m;$i>=0;$i--) { if ($n<=$t[$i]) { $h[$i]++; $i=-1; }; };
    # anything larger than the largest threshold is counted in the top bucket
    if ($i==-1) { $h[0]++; };
    # clear the terminal and reprint the histogram: threshold, percentage, count
    print "\033c";
    for (0..$m) { printf "%6s %6s %8s\n",$t[$_],sprintf("%3.2f",$h[$_]/$o*100),$h[$_]; };
}
Newer versions of tshark seem to work better with "stdbuf -i0 -o0 -e0 " prepended to the tshark command.
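That is, something like this (same interface and display filter as in the header comment above):
stdbuf -i0 -o0 -e0 tshark -i wlp2s0 -Y "tcp.analysis.ack_rtt and ip.dst==192.168.0.0/16" -e tcp.analysis.ack_rtt -T fields -E separator=, -E quote=d | /tmp/hist.pl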
PS Does anyone know if wireshark has DNS and ICMP rtt stats built in or how to easily get those?
2018 Update: See https://github.com/dagelf/pping
