Low FPS (choppy video), no sound, "No JPEG data found in image" when using a webcam / HDMI-USB dongle - VLC

This took me too long to figure out, so in the hopes of helping anyone out there dealing with any of these issues, I wanted to post the solution. But first, the problems:
I bought one of those cheap HDMI-to-USB dongles and connected my PS3 as a video source. In VLC the image looked crisp, but I was getting no sound and the video was really choppy. Checking the codec tab in the "info" section, I saw I was getting 1080p at 5 fps. I thought I had a defective dongle, but decided to check with other apps. tvtime/xawtv gave me a great framerate but a low resolution that I couldn't change; cheese let me set all the options and gave me a good framerate and good resolution (but no sound); and finally I tried OBS, which gave me a perfect result. So clearly the dongle is fine, and the problem was with VLC.
See my answer below for the solution to all those problems (and more!)

I found, through much research and experimentation, that the reason the video was choppy in VLC was that it was using the default "chroma" of YUV2, which, if I am not mistaken, is uncompressed. (You can check your webcam/dongle's capabilities by running: v4l2-ctl --list-formats-ext -d /dev/video0 where /dev/video0 is your device.)
The correct setting to overcome this is mjpg. However, that results in a flood of errors saying:
[mjpeg @ 0x7f4e0002fcc0] No JPEG data found in image
This is caused by the fact that the default resolution and framerate (1080p@60fps) overwhelm what I guess is the mjpeg decoder. Setting it to 720p, or lowering the framerate to 30fps, prevents the errors.
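For instance, to test just the video path (no audio yet), a command along these lines should work; a minimal sketch, assuming your dongle is /dev/video0 as above:
# video only, forcing MJPEG and 720p (adjust the device node for your setup)
vlc v4l2:///dev/video0:chroma=mjpg:width=1280:height=720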
Next, the sound was missing; this is because I am using PulseAudio and VLC cannot figure out which source to use.
I found the pulse source by running:
pactl list short sources
which yielded:
alsa_input.usb-MACROSILICON_USB_Video-02.multichannel-input
You can test that this is the correct source by running:
vlc pulse://alsa_input.usb-MACROSILICON_USB_Video-02.multichannel-input
I found that to combine the v4l2 video source with the correct PulseAudio source, you have to set the audio via the input-slave parameter to VLC. Unfortunately, that did not work for me as specified in the guides; instead I had to set the video source as the slave. The final commands that worked for me were either of:
720p:
vlc pulse://alsa_input.YOUR-SOURCE-HERE-input --input-slave=v4l2:///dev/video0:chroma=mjpg:width=1280:height=720
1080p@30fps:
vlc pulse://alsa_input.YOUR-SOURCE-HERE-input --input-slave=v4l2:///dev/video0:chroma=mjpg:fps=30
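If you use this regularly, one option is to wrap it in a small script; a minimal sketch, where the device node and PulseAudio source name are placeholders you would replace with whatever v4l2-ctl and pactl report on your system:
#!/bin/sh
# capture-dongle.sh - hypothetical helper; substitute your own device node and source name
VIDEO_DEV=/dev/video0
AUDIO_SRC=alsa_input.usb-MACROSILICON_USB_Video-02.multichannel-input
exec vlc "pulse://$AUDIO_SRC" \
    --input-slave="v4l2://$VIDEO_DEV:chroma=mjpg:width=1280:height=720"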

Related

rtsp stream of an IP camera is much more delayed in VLC than in the NVR

I have an IP camera that I can view in VLC via the link rtsp://admin:admin@192.168.1.199:554/mpeg4/ch0/main/av_stream, but I noticed there is a significant delay to the video in VLC compared to when the camera is viewed in the NVR. VLC has a delay of 4-6 seconds, while in the NVR it's barely noticeable at all, less than 1 second of delay.
I need to know why that is so I can then plan out what methods/libraries to use in the program I'm going to make. It helps to know why, so that a possible workaround may be explored.
Is this a problem inherent to VLC, or a limitation of RTSP?
Is there any way I can reduce this delay?
First, make sure that your camera has no issue with serving multiple streams. Deactivate your camera on the NVR and check whether you get better latency.
By default, VLC uses RTSP/RTP over TCP, so force VLC to use RTSP/RTP over UDP (just google for the VLC argument) and verify whether latency improves.
As BijayRegmi wrote, be aware of the default buffering.
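For reference, these are the two VLC knobs I would experiment with for the transport and buffering suggestions above; treat the exact option names as assumptions and confirm them in your build's help output:
# force RTP over RTSP (TCP); leave the flag off to let VLC negotiate UDP transport
vlc --rtsp-tcp rtsp://admin:admin@192.168.1.199:554/mpeg4/ch0/main/av_stream
# lower the input cache (in milliseconds) to trade robustness for latency
vlc --network-caching=300 rtsp://admin:admin@192.168.1.199:554/mpeg4/ch0/main/av_stream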
You can also try ffplay from the FFmpeg suite and open the RTSP stream with it. There you get more information about the health of the stream, such as packet loss. This also gives you a second way to verify your stream/latency, so you should be able to tell which part is producing the delay.
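A rough example of that check (the ffplay flags below exist in recent FFmpeg builds, but verify against yours):
# quick low-latency comparison: no input buffering, RTP over UDP
ffplay -fflags nobuffer -rtsp_transport udp "rtsp://admin:admin@192.168.1.199:554/mpeg4/ch0/main/av_stream"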

VLC Player Keeps Resizing While It Is Streaming from RTSP Server

I am using a proprietary RTSP server (I don't have access to the source code) running on a Linaro-based embedded system. I am connecting to the device over WiFi and use VLC player to watch the stream. Every so often, VLC player's window resizes to a different size.
Is this a normal behavior in RTSP stream (resizing the video)?
-If yes, what is causing this change? Is it my WiFi bandwidth?
-If not, what are the suggested steps to find the root cause of this problem?
Thank you
Ahmad
Is this a normal behavior in RTSP stream (resizing the video)?
Yes, the RTSP DESCRIBE Request should give info about the resolution. (See this discussion)
-If yes, what is causing this change? Is it my WiFi bandwidth?
Most probably not. However I guess more info would be needed on your bandwidth and network setup.
-If not, what are the suggested steps to find the root cause of this problem?
Option 1: Try to disable (uncheck) VLC's preference to resize the interface to native video size, and see what happens.
Also see the following post over at superuser discussing automatic resizing options.
Option 2: Enable VLC's verbose mode (console log) and see what errors or messages come up. This often helps, and points into new directions to look for solutions.
Option 3: It could be a problem with how information is encoded in the stream concerning the resolution. You would need to get in touch with the vendor of your RTSP server software in order to dig deeper.
Open the VLC player and press Ctrl + P, or go to
Tools -> Preferences -> Interface (look for the options below)
Integrated video in interface [Check]
Resize interface to video size [Uncheck]
Then close and reopen the VLC player
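If you start VLC from the command line, I believe the same preference maps to the Qt interface option below; the flag name is from memory, so treat it as an assumption and confirm it in vlc --help for your version:
# keep the window size fixed instead of following the stream's advertised resolution
vlc --no-qt-video-autoresize rtsp://your-server/stream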

HLS Streaming on iPad with iOS 7 / 8 causes a 10-second frozen frame - no clue why

We have a problem with HLS h.264 mp4 on iPad devices using HLS streaming on iOS 7 and 8:
The first 9-15 seconds (the length of the first TS segment) only show the first key frame (IDR) of the second TS segment, while the sound plays normally. When the second segment begins to play, the video continues as it should.
The HLS segmenter is Wowza with a 10-second segment length. The encoding software we use is TMPG, latest version (uses x264). The funny thing is that HandBrake, XMedia Recode, and Adobe ME deliver videos which work. I am aware of the fact that this hints at a bug within our encoding software, but if someone has already had this problem with another software/segmenter combination and fixed it, I would like to know what the source of the problem was.
What we already tried:
changing almost every setting so that it sticks as close as possible to Apple's recommendations
changing GOP structure, GOP length, and encoding parameters which influence encoding efficiency
analyzing the TS segments created by Wowza; they are fine and all begin with keyframes
contacting TMPG/Apple/Wowza support
So, did anyone stumble upon this problem before? Has anyone solved it?
EDIT:
It seems that TMPGEnc uses a defective x264 implementation. The mediastreamvalidator tool from Apple returned an error stating that our TS segment "does not contain any IDR access unit with a SPS and a PPS" - which it actually does, but obviously in the wrong places, if that somehow matters.
Whatever tool you use for segmenting is not ensuring that the segment begins with an SPS+PPS+IDR. This could be an issue with your encoder or your segmenter. Basically, decoding cannot begin until all three of these things are encountered by the player. Try using mediafilesegmenter and mediastreamvalidator from Apple to analyze the problem.
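Both Apple tools are command-line utilities from the HTTP Live Streaming Tools package; roughly like this (exact flags may differ by version, so treat this as a sketch with placeholder URLs and paths):
# validate the published playlist; it reports segments whose first access unit lacks SPS/PPS/IDR
mediastreamvalidator http://example.com/stream/playlist.m3u8
# re-segment a known-good MP4 locally with 10-second targets, to compare against Wowza's output
mediafilesegmenter -t 10 -f /tmp/hls_out source.mp4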

iOS SDK avcodec_decode_video Optimization

I've recently started a project that relies on streaming FLV directly to an iOS device. Like most, I went with FFmpeg (and an iOS wrapper - kxmovie). To my surprise, the iPhone 4 is incapable of playing even SD low-bitrate FLV videos. The current implementation I'm using decodes the video/audio/subtitle frames in a dispatch_async while loop and copies the YUV frame data to an object, which is then parsed into 3 textures - Y/U/V (in case of an RGB color space, the data is just parsed) - and rendered on screen. After much trial and error, I decided to kill the whole rendering pipeline and leave only the avcodec_decode_video2 function running. Surprisingly, the FPS did not improve and the videos are still unplayable.
My question is: what can I do to improve the performance of avcodec_decode_video2?
Note:
I've tried a few commercial apps and they play the same file perfectly fine with no more than 50-60% cpu usage.
The library is based off the 1.2 branch and these are the build args:
'--arch=arm',
'--cpu=cortex-a8',
'--enable-pic',
"--extra-cflags='-arch armv7'",
"--extra-ldflags='-arch armv7'",
"--extra-cflags='-mfpu=neon -mfloat-abi=softfp -mvectorize-with-neon-quad'",
'--enable-neon',
'--enable-optimizations',
'--disable-debug',
'--disable-armv5te',
'--disable-armv6',
'--disable-armv6t2',
'--enable-small',
'--disable-ffmpeg',
'--disable-ffplay',
'--disable-ffserver',
'--disable-ffprobe',
'--disable-doc',
'--disable-bzlib',
'--target-os=darwin',
'--enable-cross-compile',
#'--enable-nonfree',
'--enable-gpl',
'--enable-version3',
And according to Instruments the following functions take about 30% CPU usage each:
Running Time Self Symbol Name
37023.9ms 32.3% 13874,8 ff_h264_decode_mb_cabac
34626.2ms 30.2% 9194,7 loop_filter
29430.0ms 25.6% 173,8 ff_h264_hl_decode_mb
As it turns out, even with NEON support, FFmpeg still executes on the CPU and thus can't decode any faster than that. There are apps that use FFmpeg together with the HW decoder; my best guess would be that they strip the header and feed Apple's AssetReader the raw h264 data.
Just for the fun of it, see what kind of performance you get from this; it does seem to play FLVs quickly, but I have not tested it on the iPhone 4:
https://github.com/mooncatventures-group/WebStreamX_flv_demo
You should use the --enable-asm optimization parameter to boost performance by another 10-15%.
Also, you must install the latest gas-preprocessor.pl.
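In configure terms, that amounts to something like the sketch below, reusing the args already listed above; flag availability depends on the exact FFmpeg 1.2 checkout, so adjust as needed:
# rebuild with the hand-written ARM assembly enabled; gas-preprocessor.pl must be on PATH
./configure --arch=arm --cpu=cortex-a8 --enable-neon --enable-asm \
    --enable-pic --target-os=darwin --enable-cross-compile \
    --extra-cflags='-arch armv7 -mfpu=neon -mfloat-abi=softfp' \
    --extra-ldflags='-arch armv7'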

DirectX.Capture FrameRates

I'm using the DirectX.Capture library to save video from a webcam to an AVI. I need the saved video to have 50 fps or more, but when I use this:
capture.FrameRate = 59.994;
the FrameRate doesn't change at all. It was 30 before that line, and after that line it is still 30. I tried other values, even 20 and 10, and nothing changes.
What else should I do to be able to change that value? Or is it something regarding my hardware, and I can only hope it works on another machine?
Please help me, I don't know what to do.
Thanks
The source material (video, app, etc.) is probably only being updated at 30 fps, either because that is the way the video codec or app behaves, or because you have vsync turned on in the target app (check vsync settings; it might be getting forced by the video card drivers if there is hardware acceleration). The behaviour of DirectX.Capture is probably to clamp to the highest available framerate from the source.
If you really want to make the video 50 fps, capture it at its native rate (30/29.97) and just resample the video using some other software (note that this would be a destructive operation, since 50 is not a clean multiple of 30). This will be no different from what DX capture would do if you could force it to 50 fps (even if it's nonsensical due to the source material being at a lower framerate). FYI, most video files are between 25 and 30 FPS.
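As a rough example of that resampling step using ffmpeg (filenames are placeholders):
# duplicate frames to reach a constant 50 fps; encoder/quality options omitted for brevity
ffmpeg -i capture_30fps.avi -vf fps=50 output_50fps.avi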
