v4l2loopback SMPTE color bars in Genymotion - stream

I'm running v4l2loopback on an Ubuntu 18.04 machine with v4l2-ctl and VirtualBox installed.
I use the command below to initialize a loopback camera:
sudo modprobe v4l2loopback video_nr=2 card_label="Hello world" exclusive_caps=1 devices=1
v4l2-ctl --device=/dev/video2 --all
and the output from the second command above is:
Driver Info (not using libv4l2):
Driver name : v4l2 loopback
Card type : Hello world
Bus info : platform:v4l2loopback-000
Driver version: 5.3.18
Capabilities : 0x85208000
Video Memory-to-Memory
Read/Write
Streaming
Extended Pix Format
Device Capabilities
Device Caps : 0x05208000
Video Memory-to-Memory
Read/Write
Streaming
Extended Pix Format
Priority: 0
Format Video Output:
Width/Height : 416/720
Pixel Format : 'YU12'
Field : None
Bytes per Line : 416
Size Image : 449280
Colorspace : sRGB
Transfer Function : Default (maps to sRGB)
YCbCr/HSV Encoding: Default (maps to ITU-R 601)
Quantization : Default (maps to Limited Range)
Flags :
Streaming Parameters Video Capture:
Frames per second: 30.000 (30/1)
Read buffers : 2
Streaming Parameters Video Output:
Frames per second: 30.000 (30/1)
Write buffers : 2
User Controls
keep_format 0x0098f900 (bool) : default=0 value=0
sustain_framerate 0x0098f901 (bool) : default=0 value=0
timeout 0x0098f902 (int) : min=0 max=100000 step=1 default=0 value=0
timeout_image_io 0x0098f903 (bool) : default=0 value=0
Now I can feed input to it from my desktop:
sudo ffmpeg -f x11grab -r 25 -s 416x768 -i :0.0+0,0 -vcodec rawvideo -pix_fmt yuv420p -threads 0 -f v4l2 /dev/video2
Or feed it my OBS stream:
ffmpeg -f flv -listen 1 -i rtmp://localhost:1935/live/app -f v4l2 /dev/video2
Both work perfectly: I can view the output using WebRTC in Chrome and Firefox, as well as with ffplay:
ffplay /dev/video2
My machine also has a webcam on /dev/video0 which works perfectly with Genymotion.
But when I choose my "Hello world" camera, Genymotion outputs noise (SMPTE color bars) instead.
What's wrong with my Genymotion? I found that there are differences between UVC output and v4l2loopback.
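To compare what the UVC webcam and the loopback device actually advertise, the supported formats can be listed with v4l2-ctl (just a diagnostic idea; I haven't confirmed a mismatch):
v4l2-ctl --device=/dev/video0 --list-formats-ext
v4l2-ctl --device=/dev/video2 --list-formats-ext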

Can you provide the logs of the Genymotion emulator, located at ~/.Genymobile/Genymotion/deployed/<yourdevice>/genymotion-player.log? There might be interesting insights in there.

Related

How to stream cv2.VideoWriter frames to an RTSP server

Environment: Docker, Ubuntu 20.04, OpenCV 3.5.4, FFmpeg 4.2.4
I'm currently reading the output of a cv2.VideoCapture session using the CV_FFMPEG backend and successfully writing that back out in real time to a file using cv2.VideoWriter. The reason I am doing this is to draw bounding boxes on the input and save it to a new output.
The problem is that I am doing this in a headless environment (a Docker container), and I'd like to view what's being written to cv2.VideoWriter in real time.
I know there are ways to pass my display through, using XQuartz for example, so I could use cv2.imshow. But what I really want to do is write those frames to an RTSP server, so that not only my host can "watch", but other hosts can watch too.
After the video is released I can easily stream the video to my RTSP Server using this command.
ffmpeg -re -stream_loop -1 -i output.mp4 -c copy -f rtsp rtsp://rtsp_server_host:8554/stream
Is there any way to pipe the frames to the above command as they come in? Can cv2.VideoWriter itself write frames to an RTSP server?
Any ideas would be much appreciated! Thank you.
After much searching I finally figured out how to do this with FFmpeg in a subprocess. Hopefully this helps someone else!
import subprocess

import cv2
import numpy as np


def open_ffmpeg_stream_process():
    # Launch ffmpeg: read raw RGB frames from stdin and publish them to the RTSP server
    args = (
        "ffmpeg -re -stream_loop -1 -f rawvideo -pix_fmt "
        "rgb24 -s 1920x1080 -i pipe:0 -pix_fmt yuv420p "
        "-f rtsp rtsp://rtsp_server:8554/stream"
    ).split()
    return subprocess.Popen(args, stdin=subprocess.PIPE)


def capture_loop():
    ffmpeg_process = open_ffmpeg_stream_process()
    capture = cv2.VideoCapture(<video/stream>)
    while True:
        grabbed, frame = capture.read()
        if not grabbed:
            break
        # Each frame must match the -s 1920x1080 size declared in the ffmpeg command
        ffmpeg_process.stdin.write(frame.astype(np.uint8).tobytes())
    capture.release()
    ffmpeg_process.stdin.close()
    ffmpeg_process.wait()
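To check the result from another host, the stream can then be opened with ffplay (assuming the rtsp_server host and /stream path from the command above):
ffplay rtsp://rtsp_server:8554/stream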

Raspberry Pi MJPG-Streamer low latency

I've built a Raspberry Pi robot. Now I want to stream video from the Raspberry Pi's onboard camera. I followed this tutorial:
http://blog.miguelgrinberg.com/post/how-to-build-and-run-mjpg-streamer-on-the-raspberry-pi/page/2
I finally got it working, but now I want the latency to be as low as possible. Low latency is important because controlling a robot with a large lag is impossible.
Any advice?
Have a nice day!
You should probably ask this on https://raspberrypi.stackexchange.com/
All viable solutions that can be found as of now use raspivid. It directly encodes the video as H.264/MPEG, which is much more efficient than capturing every single frame.
The one which works out best for me so far is
First, on your Raspberry Pi:
raspivid -t 999999 -w 1080 -h 720 -fps 25 -hf -b 2000000 -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=<IP-OF-PI> port=5000
Then, on your PC/viewing device:
gst-launch-1.0 -v tcpclientsrc host=<IP-OF-PI> port=5000 ! gdpdepay ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false
Source: http://pi.gbaman.info/?p=150
I think I have found from experimentation that the camera board does most of the processing, relieving the Raspberry Pi of much of the load. You can see this by running top on the Pi as it captures and streams.
First I run the following on a linux client:
nc -l -p 5001 | mplayer -fps 31 -cache 512 -
Then I run the following on the raspi:
/opt/vc/bin/raspivid -t 999999 -w 1920 -h 1080 -o - | nc 192.168.1.__ 5001
This was done over an ethernet connection from raspi to linux desktop both connected to a common ethernet hub.
I have made the following observations:
these settings give me a pretty low lag (<100ms)
increasing the cache size (on the client) only leads to a larger lag, since the client will buffer more of the stream before it starts
decreasing the cache size below some lower limit (512 for me) leads to a player error: "Cannot seek backward in linear streams!"
specifying dimensions less than the default 1920x1080 leads to longer delays for smaller dimensions, especially when they are less than 640x480
specifying bitrates other than the default leads to longer delays
I'm not sure what the default bitrate is
for any of the scenarios that cause lag, it seems that the lag decreases gradually over time and most configurations I tried seemed to have practically no lag after a minute or so
It's unfortunate that very little technical information seems to be available on the board apart from what commands to run to make it operate. Any more input in the comments or edits to this answer would be appreciated.
I realise this is an old post, but I recently needed to do something similar, so I created a Node.js Raspberry Pi MJPEG server where you can pass the compression quality and the timeout (milliseconds between frames).
Start the server:
node raspberry-pi-mjpeg-server.js -p 8080 -w 1280 -l 1024 -q 65 -t 100
Options:
-p, --port port number (default 8080)
-w, --width image width (default 640)
-l, --height image height (default 480)
-q, --quality jpeg image quality from 0 to 100 (default 85)
-t, --timeout timeout in milliseconds between frames (default 500)
-h, --help display this help
-v, --version show version
Open sourced as I'm sure it will help others.

ffmpeg udp/tcp stream receive frame not same as sent

I am streaming a video from a Raspberry Pi using this command:
ffmpeg -re -threads 2 -i sample_video.m2v -f mpegts - | \
ffmpeg -f mpegts -i - -c copy -f mpegts udp://192.168.1.100:12345
The remote PC at 192.168.1.100 uses the ffmpeg library to listen to the input stream. For example:
informat = ffmpeg::av_find_input_format("mpegts");
avformat_open_input(&pFormatCtx, "udp://192.168.1.100:12345", informat, options);
However, when I compute the hash value of each decoded frame on the two sides (i.e. the Raspberry Pi and the PC), they DON'T MATCH at all. A weird thing is that among ~2000 frames there are in total ~10 frames whose hash values are the same on the sender and receiver side. The match result looks like this:
00000....00011000...00011110000...000
where 0 indicates a non-match and 1 indicates a match. The matched frames appeared 2~6 in sequence and appeared rarely, while most of the other frames have different hash values.
The hash is computed on the frame data buffer extracted using avpicture_layout(). On the Pi side, I just stream the video to a local port and there's a local process using the same code to decode and hash the frames:
ffmpeg -re -threads 2 -i sample_video.m2v -f mpegts - | \
ffmpeg -f mpegts -i - -c copy -f mpegts udp://localhost:12345
...
The streaming source, the Raspberry Pi, is connected directly to the PC with a cable. I don't think it is a packet loss problem. First, I reran the same process several times and the hash values of the received frames were the same each time (otherwise the results should differ, because packet loss is probabilistic). Second, I even tried streaming to tcp://192.168.1.100:12345 (and "tcp://192.168.1.100:12345?listen" on the PC), and the received frame hashes were still the same as before, and still different from the hash results on the Pi.
So, does anyone know why streaming to a remote address yields different decoded frames? Maybe I am missing some details.
Thanks in advance!!
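As a side note (a diagnostic sketch, not part of the original setup): ffmpeg itself can hash every decoded frame with its framemd5 muxer, which may be a quicker way to compare the two sides than custom code around avpicture_layout(). On the receiving PC and on the Pi respectively:
ffmpeg -f mpegts -i udp://192.168.1.100:12345 -map 0:v -f framemd5 receiver_hashes.txt
ffmpeg -i sample_video.m2v -map 0:v -f framemd5 sender_hashes.txt
The two text files can then be diffed to see where the frames start to diverge.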

ffmpeg - Continuously stream webcam to single .jpg file (overwrite)

I have installed ffmpeg and mjpeg-streamer. The latter reads a .jpg file from /tmp/stream and outputs it via http onto a website, so I can stream whatever is in that folder through a web browser.
I wrote a bash script that continuously captures a frame from the webcam and puts it in /tmp/stream:
while true
do
ffmpeg -f video4linux2 -i /dev/v4l/by-id/usb-Microsoft_Microsoft_LifeCam_VX-5000-video-index0 -vframes 1 /tmp/stream/pic.jpg
done
This works great, but is very slow (~1 fps). In the hopes of speeding it up, I want to use a single ffmpeg command which continuously updates the .jpg at, let's say 10 fps. What I tried was the following:
ffmpeg -f video4linux2 -r 10 -i /dev/v4l/by-id/usb-Microsoft_Microsoft_LifeCam_VX-5000-video-index0 /tmp/stream/pic.jpg
However this - understandably - results in the error message:
[image2 # 0x1f6c0c0] Could not get frame filename number 2 from pattern '/tmp/stream/pic.jpg'
av_interleaved_write_frame(): Input/output error
...because the output pattern is bad for a continuous stream of images.
Is it possible to stream to just one jpg with ffmpeg?
Thanks...
You can use the -update option:
ffmpeg -y -f v4l2 -i /dev/video0 -update 1 -r 1 output.jpg
From the image2 file muxer documentation:
-update number
If number is nonzero, the filename will always be interpreted as just a
filename, not a pattern, and this file will be continuously overwritten
with new images.
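Applied to the question's setup (a sketch, assuming the same LifeCam device path and /tmp/stream output location from the question), updating at roughly 10 fps would look like:
ffmpeg -y -f v4l2 -i /dev/v4l/by-id/usb-Microsoft_Microsoft_LifeCam_VX-5000-video-index0 -update 1 -r 10 /tmp/stream/pic.jpg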
It is possible to achieve what I wanted by using:
./mjpg_streamer -i "input_uvc.so -r 1280x1024 -d /dev/video0 -y" -o "output_http.so -p 8080 -w ./www"
...from within the mjpg_streamer directory. It does all the nasty work for you, serving the stream to a browser at this address:
http://{IP-OF-THE-SERVER}:8080/
It's also light-weight enough to run on a Raspberry Pi.
Here is a good tutorial for setting it up.
Thanks for the help!

How can I extract audio from video with ffmpeg? [closed]

I tried the following command to extract audio from video:
ffmpeg -i Sample.avi -vn -ar 44100 -ac 2 -ab 192k -f mp3 Sample.mp3
but I get the following output
libavutil 50.15. 1 / 50.15. 1
libavcodec 52.72. 2 / 52.72. 2
libavformat 52.64. 2 / 52.64. 2
libavdevice 52. 2. 0 / 52. 2. 0
libavfilter 1.19. 0 / 1.19. 0
libswscale 0.11. 0 / 0.11. 0
libpostproc 51. 2. 0 / 51. 2. 0
SamplE.avi: Invalid data found when processing input
Can anyone help, please?
To extract the audio stream without re-encoding:
ffmpeg -i input-video.avi -vn -acodec copy output-audio.aac
-vn is no video.
-acodec copy says use the same audio stream that's already in there.
Read the output to see what codec it is, to set the right filename extension.
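For example, running ffmpeg -i on the file (with no output specified) prints the stream info, including the audio codec, so you know which extension to use:
ffmpeg -i input-video.avi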
To encode high quality MP3 or MP4 audio from a movie file (e.g. AVI, MP4, MOV, etc.) or an audio file (e.g. WAV), I find it's best to use -q:a 0 for variable bit rate, and it's good practice to specify -map a to exclude video/subtitles and only grab audio:
ffmpeg -i sample.avi -q:a 0 -map a sample.mp3
If you want to extract a portion of audio from a video, use the -ss option to specify the starting timestamp and the -t option to specify the encoding duration, e.g. starting at 3 minutes and 5 seconds in and lasting 45 seconds:
ffmpeg -i sample.avi -ss 00:03:05 -t 00:00:45.0 -q:a 0 -map a sample.mp3
The timestamps need to be in HH:MM:SS.xxx format or in seconds.
If you don't specify the -t option it will go to the end.
You can use the -to option instead of the -t option if you want to specify the end point instead of a duration, e.g. for the same 45 seconds: 00:03:05 + 45 = 00:03:50.
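With -to instead of -t, the same clip would be (same file names as above):
ffmpeg -i sample.avi -ss 00:03:05 -to 00:03:50 -q:a 0 -map a sample.mp3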
Working example:
Download ffmpeg
Open a Command Prompt (Start > Run > CMD) or on a Mac/Linux open a Terminal
cd (the change directory command) to the directory containing ffmpeg.exe.
Issue your command and wait for the output file (or troubleshoot any errors)
Extract all audio tracks / streams
This puts all audio into one file:
ffmpeg -i input.mov -map 0:a -c copy output.mov
-map 0:a selects all audio streams only. Video and subtitles will be excluded.
-c copy enables stream copy mode. This copies the audio and does not re-encode it. Remove -c copy if you want the audio to be re-encoded.
Choose an output format that supports your audio format. See comparison of container formats.
Extract a specific audio track / stream
Example to extract audio stream #4:
ffmpeg -i input.mkv -map 0:a:3 -c copy output.m4a
-map 0:a:3 selects audio stream #4 only (ffmpeg starts counting from 0).
-c copy enables stream copy mode. This copies the audio and does not re-encode it. Remove -c copy if you want the audio to be re-encoded.
Choose an output format that supports your audio format. See comparison of container formats.
Extract and re-encode audio / change format
Similar to the examples above, but without -c copy. Various examples:
ffmpeg -i input.mp4 -map 0:a output.mp3
ffmpeg -i input.mkv -map 0:a output.m4a
ffmpeg -i input.avi -map 0:a -c:a aac output.mka
ffmpeg -i input.mp4 output.wav
Extract all audio streams individually
The input in this example has 4 audio streams. Each audio stream will be output as a single, individual file.
ffmpeg -i input.mov -map 0:a:0 output0.wav -map 0:a:1 output1.wav -map 0:a:2 output2.wav -map 0:a:3 output3.wav
Optionally add -c copy before each output file name to enable stream copy mode.
Extract a certain channel
Use the channelsplit filter. Example to get the Front Right (FR) channel from a stereo input:
ffmpeg -i stereo.wav -filter_complex "[0:a]channelsplit=channel_layout=stereo:channels=FR[right]" -map "[right]" front_right.wav
channel_layout is the channel layout of the input. It is not automatically detected so you must provide the layout name.
channels lists the channel(s) you want to extract.
See ffmpeg -layouts for audio channel layout names (for channel_layout) and channel names (for channels).
Stream copy mode (-c copy) cannot be used when filtering, so the audio must be re-encoded.
See FFmpeg Wiki: Audio Channels for more examples.
What's the difference between -map and -vn?
ffmpeg has a default stream selection behavior that will select 1 stream per stream type (1 video, 1 audio, 1 subtitle, 1 data).
-vn is an old, legacy option. It excludes video from the default stream selection behavior. So audio, subtitles, and data are still automatically selected unless told not to with -an, -sn, or -dn.
-map is more complicated but more flexible and useful. -map disables the default stream selection behavior and ffmpeg will only include what you tell it to with -map option(s). -map can also be used to exclude certain streams or stream types. For example, -map 0 -map -0:v would include all streams except all video.
See FFmpeg Wiki: Map for more examples.
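For example, the negative-map case mentioned above as a full command (placeholder file names):
ffmpeg -i input.mkv -map 0 -map -0:v -c copy output.mkv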
Errors
Invalid audio stream. Exactly one MP3 audio stream is required.
MP3 only supports 1 audio stream. The error means you are trying to put more than 1 audio stream into MP3. It can also mean you are trying to put non-MP3 audio into MP3.
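One way around it (a sketch reusing the stream selection shown earlier, with placeholder file names) is to pick a single audio stream and let ffmpeg re-encode it to MP3:
ffmpeg -i input.mkv -map 0:a:0 output.mp3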
WAVE files have exactly one stream
Similar to above.
Could not find tag for codec in stream #0, codec not currently supported in container
You are trying to put an audio format into an output that does not support it, such as PCM (WAV) into MP4.
Remove -c copy, choose a different output format (change the file name extension), or manually choose the encoder (such as -c:a aac).
See comparison of container formats.
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
This is a useless, generic error. The actual, informative error should immediately precede this generic error message.
It seems like you're extracting audio from a video file and downmixing it to stereo.
To just extract audio (without re-encoding):
ffmpeg.exe -i in.mp4 -vn -c:a copy out.m4a
To extract audio & downmix to stereo (without re-encoding):
ffmpeg.exe -i in.mp4 -vn -c:a copy -ac 2 out.m4a
To generate an mp3 file, you'd re-encode audio:
ffmpeg.exe -i in.mp4 -vn -ac 2 out.mp3
-c (select codecs) & -map (select streams) options:
-c:a -> select best supported audio (transcoded)
-c:a copy -> best supported audio (copied)
-map 0:a -> all audio from 1st (audio) input file (transcoded)
-map 0:0 -> 1st stream from 1st input file (transcoded)
-map 1:a:0 -> 1st audio stream from 2nd (audio) input file (transcoded)
-map 1:a:1 -c:a copy -> 2nd audio stream from 2nd (audio) input file (copied)
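A sketch combining two of the selectors above (hypothetical file names; the second input is assumed to be an audio-only file):
ffmpeg.exe -i video.mp4 -i narration.m4a -map 1:a:0 out.m4a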
ffmpeg -i sample.avi will give you the audio/video format info for your file. Make sure you have the proper libraries configured to parse the input streams. Also, make sure that the file isn't corrupt.
The command line is correct and works on a valid video file. I would make sure that you have installed the correct library to work with MP3; install LAME, or try with another audio codec.
Usually
ffmpeg -formats
or
ffmpeg -codecs
would give sufficient information so that you know more.
To encode mp3 audio ffmpeg.org shows the following example:
ffmpeg -i input.wav -codec:a libmp3lame -qscale:a 2 output.mp3
I extracted the audio from a video just by replacing input.wav with the video filename. The 2 selects a VBR quality level that averages about 190 kbit/s. You can see the other quality levels at my link above.
For people looking for a simpler way to extract audio from a video file while retaining the original video file's parameters, you can use:
ffmpeg -i <video_file_name.extension> <audio_file_name.extension>
For example, running:
ffmpeg -i screencap.mov screencap.mp3
extracts an mp3 audio file from a mov video file.
Here's what I just used:
ffmpeg -i my.mkv -map 0:3 -vn -b:a 320k my.mp3
Options explanation:
my.mkv is the source video file; you can use other formats as well
-map 0:3 means I want the 3rd stream from the video file. Put your N there; video files often have multiple audio streams. You can omit it or use -map 0:a to take the default audio stream. Run ffprobe my.mkv to see which streams the video file has.
my.mp3 is the target audio filename, and ffmpeg figures out I want an MP3 from its extension. In my case the source audio stream is ac3 DTS, and just copying wasn't what I wanted.
320k is the desired target bitrate
-vn means I don't want video in the target file
Creating an audio book from several video clips
First, extract the audio (as .m4a) from a bunch of H.264 files:
for f in *.mp4; do ffmpeg -i "$f" -vn -c:a copy "$(basename "$f" .mp4).m4a"; done
the -vn output option disables video output (automatic selection or mapping of any video stream). For full manual control see the -map option.
Optional
If there's an intro of, say, 40 seconds, you can skip it with the -ss parameter:
for f in *.m4a; do ffmpeg -i "$f" -ss 00:00:40 -c copy crop/"$f"; done
To combine all files in one:
ffmpeg -f concat -safe 0 -i <(for f in ./*.m4a; do echo "file '$PWD/$f'"; done) -c copy output.m4a
If the audio wrapped inside the AVI is not MP3 format to start with, you may need to specify -acodec mp3 as an additional parameter, or whatever your MP3 codec is (on Linux systems it's probably -acodec libmp3lame). You may also get the same effect, platform-agnostic, by instead specifying -f mp3 to "force" the format to MP3, although not all versions of ffmpeg still support that switch. Your mileage may vary.
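For example (file names reused from the question, codec choice assumed as above for Linux):
ffmpeg -i Sample.avi -vn -acodec libmp3lame Sample.mp3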
To extract without conversion I use a context menu entry, set up as a file manager custom action in Linux, to run the following (after having checked what audio type the video contains; the example is for a video containing Ogg audio):
bash -c 'ffmpeg -i "$0" -map 0:a -c:a copy "${0%%.*}".ogg' %f
which is based on the ffmpeg command ffmpeg -i INPUT -map 0:a -c:a copy OUTPUT.
I have used -map 0:1 in that without problems, but, as said in a comment by @LordNeckbeard, "Stream 0:1 is not guaranteed to always be audio. Using -map 0:a instead of -map 0:1 will avoid ambiguity."
Use -b:a instead of -ab, as -ab is outdated now. Also make sure your input file path is correct.
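For example, the question's command with -b:a in place of -ab (otherwise unchanged):
ffmpeg -i Sample.avi -vn -ar 44100 -ac 2 -b:a 192k -f mp3 Sample.mp3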
To extract audio from a video I have used the command below, and it's working fine.
String[] complexCommand = {"-y", "-i", inputFileAbsolutePath, "-vn", "-ar", "44100", "-ac", "2", "-b:a", "256k", "-f", "mp3", outputFileAbsolutePath};
Here,
-y - Overwrite output files without asking.
-i - FFmpeg reads from an arbitrary number of input “files” specified by the -i option
-vn - Disable video recording
-ar - sets the sampling rate for audio streams if encoded
-ac - Set the number of audio channels.
-b:a - Set the audio bitrate
-f - format
Check out my complete sample FFmpeg Android project on GitHub.
