Kurento - Blurriness in the images stored from the remote stream - OpenCV

What I did:
I am using Kurento Media Server to store video stream frames on the server. I can store the frames by using the opencv-plugin sample.
I am storing the video frames in the two scenarios below.
1) I need to take the images when the user shows their face in front of the camera. (Note: no movement)
Issues: None. I get good-quality images.
2) I need to take the images when the user walks around a room. (Note: the user is moving)
Issues: Most of the images stored on the server are blurred while the user is moving (walking).
What I want:
i) Is this the default behavior of KMS (GStreamer)?
Note: I can see the local stream video clearly in the browser while moving, but the remote stream video gets blurred while moving.
ii) Has anyone faced this issue before? If so, how do I solve it?
iii) Do I need to change any GStreamer configuration?
iv) Can anyone give me a suggestion for overcoming this issue?

The problem you are having is that the exposure time of your camera is long. It's like taking a picture of a moving car in low light.
When there is movement in the scene, grabbing a single frame, especially if the camera's exposure time is long (due to low-light conditions or low camera quality), will end up producing this kind of image.
On continuous video you don't notice this blurriness because there is a sequence of images, and your brain fills the gaps.
Edit
You can try to improve the quality you are sending to the server by changing constraints on the WebRtcEndpoint using the properties setMaxVideoSendBandwidth and setMaxVideoRecvBandwidth. As long as there is available bandwidth, you'll get better quality.

Related

Fast video stream start

I am building an app that streams video content, something like TikTok, so you can swipe through videos in a table, and when a new cell becomes visible the video starts playing. It works great, except when you compare it to TikTok, Instagram, etc. My video starts streaming pretty fast, but not always; it is very sensitive to network quality, and sometimes even when the network is great it still buffers too long. When compared to TikTok or Instagram under the same conditions, they don't seem to have that problem. I am using JWPlayer as the video hosting service and AVPlayer as the player. I am also doing an async preload of assets before assigning them to the AVPlayerItem. So my question is: what else can I do to speed up video start? Do I need to do some special video preparation before uploading to the streaming service? (I stream .m3u8 files.) Is there some set of presets that enables optimal streaming quality and start speed? Thanks in advance.
So there are a few things you can do.
HLS is Apple's preferred method of streaming to an Apple device, so try to use it as much as possible for iOS devices.
The best practice when it comes to mobile streaming is to offer multiple resolutions. The trick is to start with the lowest resolution available to get the video started, then switch to a higher resolution once the connection is determined to be capable of it. Generally this happens quickly enough that the user doesn't really notice. YouTube is the best example of this tactic. HLS does this automatically (an .m3u8 file is just the HLS playlist that lists the available variants).
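One way to nudge AVPlayer toward that "start low, switch up" behavior is sketched below; the stream URL is a placeholder and the numbers are only illustrative. Capping preferredPeakBitRate forces a low HLS variant for a quick start, and removing the cap afterwards lets the player adapt upward on its own.

```swift
import AVFoundation

// Sketch only: "streamURL" is a placeholder for your own .m3u8 URL.
// A low preferredPeakBitRate makes AVPlayer pick a low-bitrate HLS variant
// first so playback can start quickly; lifting the cap later lets the player
// switch up to higher resolutions by itself.
func makeFastStartPlayer(streamURL: URL) -> AVPlayer {
    let item = AVPlayerItem(url: streamURL)
    item.preferredPeakBitRate = 500_000        // ~500 kbps cap for a fast start (illustrative)
    item.preferredForwardBufferDuration = 2    // don't insist on a deep buffer before starting

    let player = AVPlayer(playerItem: item)
    player.automaticallyWaitsToMinimizeStalling = false

    // Once playback is under way, remove the cap so AVPlayer can adapt upward.
    DispatchQueue.main.asyncAfter(deadline: .now() + 3) {
        item.preferredPeakBitRate = 0          // 0 = no artificial limit
    }
    return player
}
```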
Assuming you are offering a UICollectionView or UITableView, try to start low-resolution streams of every video on the screen in the background every time the scrolling stops. Not only does this allow you to do some cool preview stuff based on the buffer, but when the user taps a cell the video is already established. If that's too slow, try just the middle video.
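A minimal sketch of that preloading idea (the preloadedAssets dictionary and visibleVideoURLs parameter are hypothetical names, not part of any framework): loading the "playable" key up front means the asset's metadata is already resolved by the time a cell is tapped.

```swift
import AVFoundation

// Sketch only: call this when scrolling stops (e.g. from
// scrollViewDidEndDecelerating) with the URLs of the visible cells.
var preloadedAssets: [URL: AVURLAsset] = [:]

func preloadAssets(for visibleVideoURLs: [URL]) {
    for url in visibleVideoURLs where preloadedAssets[url] == nil {
        let asset = AVURLAsset(url: url)
        preloadedAssets[url] = asset
        asset.loadValuesAsynchronously(forKeys: ["playable"]) {
            var error: NSError?
            let status = asset.statusOfValue(forKey: "playable", error: &error)
            DispatchQueue.main.async {
                if status != .loaded {
                    preloadedAssets[url] = nil   // drop assets that failed to preload
                }
            }
        }
    }
}

// Later, when a cell is tapped, reuse the already-loaded asset:
// let item = AVPlayerItem(asset: preloadedAssets[url] ?? AVURLAsset(url: url))
```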
Edit the video in the background before upload so that it is only at the maximum resolution you expect it to be played at. There are no 4K screens on any iOS device and probably never will be, so cut down the amount of data.
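As a rough illustration of that re-encode step (the file URLs are placeholders), AVAssetExportSession with a 720p preset is one way to cap the uploaded resolution:

```swift
import AVFoundation

// Sketch only: "sourceURL" and "outputURL" are placeholders. Re-exporting with
// a 720p preset before upload caps the resolution (and size) of what the
// streaming service has to serve.
func exportFor720pUpload(sourceURL: URL, outputURL: URL,
                         completion: @escaping (Bool) -> Void) {
    let asset = AVAsset(url: sourceURL)
    guard let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPreset1280x720) else {
        completion(false)
        return
    }
    export.outputURL = outputURL
    export.outputFileType = .mp4
    export.shouldOptimizeForNetworkUse = true   // moov atom up front helps streaming start faster
    export.exportAsynchronously {
        completion(export.status == .completed)
    }
}
```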
Without more specifics, this is all I've got for now. Hope I understood your question correctly. Good luck!

Removing low frequency (hiss) noise from video in iOS

I am recording videos and playing them back using AVFoundation. Everything is perfect except for a hissing noise that is present throughout the whole video. You can hear this hissing in every video captured from any iPad; even videos captured with Apple's built-in Camera app have it.
To hear it clearly, record a video in a place that is as quiet as possible, without speaking. It can be detected very easily through headphones with the volume at maximum.
After researching, I found out that this hissing is produced by the device's preamplifier and cannot be avoided while recording.
The only possible solution is to remove it during post-processing of the audio. Low-frequency noise can be removed by implementing a low-pass filter and noise gates. There are applications such as Adobe Audition which can perform this operation, and this video shows how it is achieved using Adobe Audition.
I have searched the Apple docs and found nothing that can achieve this directly. So I want to know if there exists any library, API, or open-source project which can perform this operation. If not, how can I start going in the right direction, because it does look like a complex task?
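There is no single built-in "de-hiss" call in AVFoundation, but AVAudioEngine with AVAudioUnitEQ is one possible starting point for the filtering part. The sketch below is only illustrative: the file URL is a placeholder, the cutoff frequency is a guess, and depending on where the noise actually sits you may want a high-pass band instead of (or alongside) a low-pass one; a real noise gate would need additional per-buffer processing.

```swift
import AVFoundation

// Sketch only: plays an audio file (placeholder URL) through one EQ band.
// A low-pass band attenuates content above cutoffFrequency; use .highPass
// instead if the offending noise is low-frequency rumble rather than hiss.
// For true offline cleanup you would run AVAudioEngine in manual rendering
// mode and write the filtered buffers back out; this just plays the result.
func playFiltered(fileURL: URL, cutoffFrequency: Float = 8_000) throws {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let eq = AVAudioUnitEQ(numberOfBands: 1)

    let band = eq.bands[0]
    band.filterType = .lowPass
    band.frequency = cutoffFrequency
    band.bypass = false

    let file = try AVAudioFile(forReading: fileURL)
    engine.attach(player)
    engine.attach(eq)
    engine.connect(player, to: eq, format: file.processingFormat)
    engine.connect(eq, to: engine.mainMixerNode, format: file.processingFormat)

    try engine.start()
    player.scheduleFile(file, at: nil, completionHandler: nil)
    player.play()
}
```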

Still pin image capture response time

I have a problem with the response time when capturing an image from the still pin.
I have a system running a 320x480 video stream from the capture pin, and I push a button to snap a 720p image directly from the still pin at the same time. But the response time is too long, around 6 seconds, which is not desirable.
I have tried other video capture software that supports snapping a picture while streaming video; the response time is similar.
I am wondering whether this is a hardware problem or a software problem, and how still pin capture actually works. Does the image come from interpolation, or does the hardware change resolution?
For example, the camera starts at one resolution setting, keeps sensing, and pushes the data to the buffer over USB. Is it possible for it to immediately change to another resolution setting and snap an image? Is this why the system takes the picture so slowly?
Or is there a way to keep the video streaming at a high frame rate and snap a high-resolution image immediately, without interpolation?
I am working on a project that snaps an image from the video stream. The technology I use is DirectShow, and the response time is not as long as yours. In my experience, the response time has nothing to do with the streaming frame rate.
Usually a camera has its own default resolution. It is impossible for it to immediately change to another resolution setting and snap an image, so that is not the reason.
Could you please show me some code, and tell me your camera's model?

Apply filter to MPMoviePlayer thumbnails without lag

I am able to generate a CGImage from a thumbnail in MPMoviePlayer. What I want to do is apply a filter on the image and show it on the device as fast as possible (probably in a UIImageView).
The caveat here is that I need to apply the filter to every frame of the video so the user sees filtered images in a video stream, with no lag.
At the moment I get the thumbnail, apply my filter, and set my UIImageView.image to this filtered image. The filter works fine, the image shows up, but the app really lags. Is there any way to speed this up?
I've also tried using a CADisplayLink, as this has helped me speed up multiple UIImages flying around the screen at once, but it doesn't do anything in this instance. Any help would be appreciated.
Thank you.
Use Brad Larson's GPUImage framework. In short... it's brilliant.
Here's an overview:
The GPUImage framework is a BSD-licensed iOS library that lets you apply GPU-accelerated filters and other effects to images, live camera video, and movies. In comparison to Core Image (part of iOS 5.0), GPUImage allows you to write your own custom filters, supports deployment to iOS 4.0, and has a simpler interface. However, it currently lacks some of the more advanced features of Core Image, such as facial detection.
For massively parallel operations like processing images or live video frames, GPUs have some significant performance advantages over CPUs. On an iPhone 4, a simple image filter can be over 100 times faster to perform on the GPU than an equivalent CPU-based filter.
Here's the link to the Git repository, with details and a sample project where live processing is done with many filters: https://github.com/BradLarson/GPUImage
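To give a feel for the filter-chain style, here is a rough sketch of using GPUImage from Swift (GPUImage itself is Objective-C, so the exact Swift names can differ slightly between versions; the view controller and the sepia filter are just illustrative choices). Frames stay on the GPU from camera to screen, which avoids the per-frame UIImage round trip that causes the lag described above.

```swift
import UIKit
import AVFoundation
import GPUImage

// Sketch only: camera -> filter -> GPUImageView, all on the GPU.
class FilteredCameraViewController: UIViewController {
    var videoCamera: GPUImageVideoCamera?
    let filter = GPUImageSepiaFilter()

    override func viewDidLoad() {
        super.viewDidLoad()

        // On-screen view that GPUImage renders into directly.
        let filteredView = GPUImageView(frame: view.bounds)
        view.addSubview(filteredView)

        videoCamera = GPUImageVideoCamera(sessionPreset: AVCaptureSession.Preset.hd1280x720.rawValue,
                                          cameraPosition: .back)
        videoCamera?.outputImageOrientation = .portrait

        // Chain: camera output feeds the filter, the filter feeds the view.
        videoCamera?.addTarget(filter)
        filter.addTarget(filteredView)

        videoCamera?.startCameraCapture()
    }
}
```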

OpenCV delay in camera output on the screen

I noticed a strange thing about OpenCV. I used one of the basic sample C programs delivered with OpenCV to show the camera output on the screen. However, I see the output on the screen with a tiny delay compared to what the camera sees, so if I move my hand in front of the camera, it shows up on the screen with about a 0.1-second delay. We are developing an application that is very sensitive to these delays. Is there a way to remove this delay so that the image transfer is effectively instantaneous? I don't see such a delay when I look at my camera output via Skype, for example.
Thank you very much!
P.
The OpenCV highgui display window is only meant for simple display of image-processing results; it's not optimized for high performance or low latency.
You will have to write something to talk between the video input library and whatever display library you want to use.
Just to confirm: yes, once I turned off the highgui video output, the processing speed went up significantly and the FPS along with it. Now the app is capable of grabbing and processing frames at 80 FPS. One solution to similar problems that doesn't require writing a new video output library is to display only every, say, tenth frame of the video to save processing power.
Thanks
