How to control the number of buffered frames in VLCKit on iOS?

I have just built the VLC library for iOS from VLCKit
and am using it to display a video stream. I need it to display in real time with the lowest possible latency, so I tried to find a way to reduce the number of buffered frames (or something similar) before they are shown in a UIView.
I started looking into the MobileVLCKit module, but no property seems to let me control that.
I am wondering whether the change can be made in MobileVLCKit itself or has to happen in the underlying VLC library.
If so, will I need to modify the library and rebuild it? Which parameter should I change?

After spending a lot of time digging into the VLC library without success, I tried streaming over RTSP instead of RTMP, and the real-time behaviour of the video improved.
I also found a workaround: setting a timer that forces the player to skip forward through the buffered frames. It can cause stuttering, but it keeps the video closer to real time.
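Separately from the RTSP/timer workaround, libVLC also has a network-caching option (in milliseconds) that can be passed as a media option, which is the closest thing to controlling how much is buffered before display. A minimal sketch, assuming the MobileVLCKit build exposes VLCMedia's addOptions(_:) and treating the 150 ms value as a placeholder to tune:

```swift
import UIKit
import MobileVLCKit

// Sketch: request a smaller network cache so fewer milliseconds are buffered before display.
// A lower value reduces latency but makes playback less resilient to network jitter.
func makeLowLatencyPlayer(streamURL: URL, videoView: UIView) -> VLCMediaPlayer {
    let player = VLCMediaPlayer()
    let media = VLCMedia(url: streamURL)
    media.addOptions(["network-caching": 150])   // milliseconds; placeholder value to tune
    player.media = media
    player.drawable = videoView                  // the UIView the frames are rendered into
    return player
}
```

Calling play() on the returned player starts the stream; if playback stutters, the caching value has to go back up.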

Related

Video quality selector on iOS (React Native)

We are trying to create a video quality selector in a mobile app that works the same way as YouTube's: a quality is selected, the video plays in that quality, and it does not automatically switch. From the backend we have m3u8 files that contain all the resolutions the video can play in and their URLs. On web and Android we have found that we can read the contents of these m3u8 files to display the available resolutions and pick the matching track, e.g. select 720p in the menu and then play the 720p video track.
On iOS, however, we are using react-native-video, which wraps AVPlayer on the native side and, from my research, does not seem to provide access to select the individual tracks from the m3u8 file the way Android does. selectedVideoTrack is exposed on react-native-video for iOS but does not work, likely for that reason.
The react-native-video API does let us adjust the maximum bit rate, but to pin a quality we would need to set an exact bit rate, or at least a minimum bit rate, and neither is exposed by AVPlayer and therefore not by react-native-video either.
One solution I have tried is to create a separate m3u8 file for each resolution the video supports, containing only that option, so it cannot automatically degrade or improve resolution (as HLS normally does) once a specific quality is selected. When a different quality is chosen, I change the underlying URL source of the video and seek to the previous position so the video keeps its place instead of replaying from the beginning. This works, but it requires a large change to our current backend infrastructure and causes a noticeable wait while the new URL loads, so I would like to know whether there are better solutions before we are forced to go forward with this one.
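For reference, the switch-and-seek part of that workaround looks roughly like this on the native AVPlayer side (the helper name and the single-resolution playlist URL are assumptions, not react-native-video API):

```swift
import AVFoundation

// Hypothetical helper: swap in a playlist that contains only the chosen quality,
// then jump back to where playback was so the video does not restart.
func switchQuality(of player: AVPlayer, to variantURL: URL) {
    let resumeTime = player.currentTime()            // remember the playhead position
    let newItem = AVPlayerItem(url: variantURL)      // single-resolution m3u8
    player.replaceCurrentItem(with: newItem)
    player.seek(to: resumeTime) { _ in               // restore the position, then resume
        player.play()
    }
}
```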
So my question is: how are YouTube and other iOS apps with a quality selector doing this? Are they following my method, or am I missing something that lets them work with different resolution videos? Answers can be native code as well as JavaScript / React Native solutions.

SoundCloud waveform generation mechanics and display

I am developing an app for iOS devices that is supposed to show waveforms of music files like SoundCloud does. I have managed to generate the waveform of a fully downloaded file, but how do I generate the waveform of streaming audio during playback? If anyone knows how SoundCloud presents its waveforms, please reply.
If we are talking about SoundCloud, I think they are displaying the waveform from precomputed metadata attached to each track. Why does that seem right? Because the waveform is drawn for each track even before it starts playing, without waiting for it to stream. A similar approach might be a suitable solution for your issue.
That said, I suggest checking out this library; it might contain what you are looking for (drawing the waveform while the audio file streams).
This Q&A might also be helpful for your case.
Hope this helps.
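For what it is worth, the offline case the asker already has working (turning a downloaded file into drawable peaks) can be sketched like this; the function name and bucket scheme are made up for illustration and have nothing to do with SoundCloud's own metadata format:

```swift
import AVFoundation

// Sketch: reduce a local audio file to `bucketCount` peak values suitable for drawing a waveform.
// The whole file is read into memory, so this is only reasonable for short, downloaded tracks.
func waveformPeaks(for fileURL: URL, bucketCount: Int) throws -> [Float] {
    let file = try AVAudioFile(forReading: fileURL)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                        frameCapacity: AVAudioFrameCount(file.length)) else {
        return []
    }
    try file.read(into: buffer)

    let samples = UnsafeBufferPointer(start: buffer.floatChannelData?[0],
                                      count: Int(buffer.frameLength))
    let samplesPerBucket = max(1, samples.count / bucketCount)

    // Keep the absolute peak of each bucket; RMS would work just as well here.
    return stride(from: 0, to: samples.count, by: samplesPerBucket).map { start in
        let end = min(start + samplesPerBucket, samples.count)
        return (start..<end).reduce(Float(0)) { max($0, abs(samples[$1])) }
    }
}
```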

How to add a real-time timestamp to streamed and recorded video for every frame in Swift?

My app uses the VideoCore project for live streaming to a Wowza server and storing the video. It also uses AVCaptureMovieFileDataOutput to record offline video.
I want to embed the capture timestamp at the top-left of the video, and not as a static time: it should not be a static watermark but a display of the actual capture time.
For the streaming case I have no idea for now. For the offline case I tried to use AVCaptureAudioDataOutput to get every frame and add a time text overlay, but this causes the preview screen to freeze.
Any tips are helpful.
Thank you.
My platform is Xcode 7.3 + Swift 2.
I did something similar using transcoding on Wowza. The transcoder menu enables an image overlay, and that image can be refreshed every second (or less), so if you create an image with the timestamp every second, Wowza takes it and puts it on the stream every second. You can define where to place the image, its size, and its transparency.
To create the image I use PHP, but you could use any other tool that can generate images.
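Purely as an illustration of the same idea in Swift rather than PHP (the file path, image size, and font are placeholders), the per-second timestamp image could be rendered like this:

```swift
import UIKit

// Sketch: render the current wall-clock time into a small transparent PNG.
// A repeating one-second Timer would call this to overwrite the file the Wowza overlay points at.
func writeTimestampImage(to url: URL) {
    let formatter = DateFormatter()
    formatter.dateFormat = "yyyy-MM-dd HH:mm:ss"
    let text = formatter.string(from: Date()) as NSString

    let renderer = UIGraphicsImageRenderer(size: CGSize(width: 240, height: 32))
    let image = renderer.image { _ in
        text.draw(at: CGPoint(x: 4, y: 6),
                  withAttributes: [.font: UIFont.systemFont(ofSize: 16),
                                   .foregroundColor: UIColor.white])
    }
    try? image.pngData()?.write(to: url)
}
```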

Removing low frequency (hiss) noise from video in iOS

I am recording videos and playing them back using AVFoundation. Everything is perfect except for the hissing that runs through the whole video. You can hear this hissing in every video captured on any iPad; even videos captured with Apple's built-in Camera app have it.
To hear it clearly, record a video in as quiet a place as possible without saying anything. It is very easy to detect through headphones with the volume at maximum.
After researching, I found out that this hissing is produced by the device's preamplifier and cannot be avoided while recording.
The only possible solution is to remove it during post-processing of the audio. Low-frequency noise can be removed by implementing a low-pass filter and noise gates. Applications such as Adobe Audition can perform this operation, and this video shows how it is achieved in Adobe Audition.
I have searched the Apple docs and found nothing that can achieve this directly. So I want to know whether any library, API, or open-source project exists that can perform this operation. If not, how can I start going in the right direction? It does look like a complex task.
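There does not seem to be a single-call API for this, but as a starting point the filtering step mentioned above can be prototyped with AVAudioEngine and AVAudioUnitEQ. This is only a sketch: the cutoff frequency is an assumed value to tune by ear, and it does not include a noise gate.

```swift
import AVFoundation

// Sketch: play a recorded audio file through a single EQ band acting as a low-pass filter.
// Keep a reference to the returned engine, or playback stops as soon as it is deallocated.
func playFiltered(fileURL: URL) throws -> AVAudioEngine {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let eq = AVAudioUnitEQ(numberOfBands: 1)

    let band = eq.bands[0]
    band.filterType = .lowPass
    band.frequency = 8_000        // Hz; assumed starting point, tune by ear
    band.bypass = false

    engine.attach(player)
    engine.attach(eq)

    let file = try AVAudioFile(forReading: fileURL)
    engine.connect(player, to: eq, format: file.processingFormat)
    engine.connect(eq, to: engine.mainMixerNode, format: file.processingFormat)

    try engine.start()
    player.scheduleFile(file, at: nil, completionHandler: nil)
    player.play()
    return engine
}
```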

MPMoviePlayer Buffer size/Adjustment

I have been using MPMoviePlayer and its playableDuration to check the available duration of a movie.
The playable duration always seems to be only about one second ahead of my current playback time, and I would basically like to increase this.
I have tried prepareToPlay, but it seems to make no noticeable difference to playableDuration.
I have tried setting as many parameters as possible up front, such as MPMovieSourceType, the media type, and the like, in an attempt to change the value, but all to no avail.
Just to clear a few things up first: I am using both MPMoviePlayer and AVPlayer, which play different streams simultaneously, because the video and audio I am using are split.
EDIT
It seems I overlooked the file size affecting the stream and should have read more of the Apple resources rather than looking elsewhere. As far as I can tell, the issue is that the file size is too large, and therefore a server-side media segmenter has to be implemented.
Apple Resource on Media Segmenting
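For context, the check described in the question amounts to comparing playableDuration with the current playback time, roughly as below (MPMoviePlayerController has since been deprecated in favour of AVPlayer):

```swift
import MediaPlayer

// Sketch: how many seconds of media are buffered ahead of the playhead.
func bufferedAhead(of player: MPMoviePlayerController) -> TimeInterval {
    return player.playableDuration - player.currentPlaybackTime
}
```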
