Does AVPlayer auto-adjust when playing an m3u8 playlist? - ios

After spending some time setting up a transcoding process on AWS, I am finding that loading times for videos have not dropped as expected with HLS (m3u8).
It seems that if I use AVPlayer directly, without AVPlayerViewController, I may need to manage the video stream quality myself? My understanding was that with an m3u8 this would be handled automatically, and the best quality would be chosen depending on network conditions, device, etc.
So far the loading times seem to be the same, if not slightly worse, than without the m3u8 when AVPlayer is used as-is.
To better understand what's going on I've been trying out a few things.
1) While the following has reduced loading times, I would prefer to do a bit more than just drop the bitrate all the way down whenever the device is not on Wi-Fi:
self.player?.currentItem?.preferredPeakBitRate = 1
This gives me a fairly low-quality video, but it loads quickly. I have yet to figure out how to detect the actual bitrate being used, though (since setting this value has improved loading times dramatically, I am going to assume AVPlayer does not handle the adjustments on its own?).
2) I also haven't had any luck with the following; it causes an infinite spinner, even with preferredPeakBitRate set to 1:
self.player.automaticallyWaitsToMinimizeStalling = false
3) I am open to using a third-party library that might handle this. I found something called VKVideoPlayer that might do some of it?
Thanks

It's possible in iOS 8 and onwards.
The following is copied from Apple's documentation:
The desired limit, in bits per second, of network bandwidth consumption for this item.
Swift: var preferredPeakBitRate: Double
Objective-C: @property(nonatomic) double preferredPeakBitRate
Set preferredPeakBitRate to non-zero to indicate that the player should attempt to limit item playback to that bit rate, expressed in bits per second.
If network bandwidth consumption cannot be lowered to meet the preferredPeakBitRate, it will be reduced as much as possible while continuing to play the item.
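For what it's worth, a minimal sketch of how this could look in practice; streamURL and the 2 Mbps cap are illustrative assumptions, and the access log is one way to see which bitrate the player actually picked (it only has entries once playback has started):

import AVFoundation

// streamURL is assumed to be your m3u8 URL; the 2 Mbps cap is an example value.
let player = AVPlayer(url: streamURL)
player.currentItem?.preferredPeakBitRate = 2_000_000

// Once playback has started, the access log reports the selected variant.
if let event = player.currentItem?.accessLog()?.events.last {
    print("indicated bitrate:", event.indicatedBitrate) // bitrate of the chosen variant
    print("observed bitrate:", event.observedBitrate)   // measured network throughput
}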

Related

How to reduce VLCJ memory and CPU consumption when playing HLS?

My JavaFX application has six small windows and one large one. Each plays HLS using VLCJ.
From time to time the picture freezes in some windows, so I want to somehow reduce the players' resource consumption on the PC.
How can I do this?
In the six small windows I don't need sound; if I can turn it off with a parameter, will that reduce memory or CPU consumption?
At the moment I remove the sound there with --aout=directsound and the mute() function, but perhaps the audio is still processed by the players and consumption is not actually reduced.
Since these are small windows, high-quality content does not need to be displayed there. Is it possible to reduce the quality of the content in the player? Could this help, and how would I do it?
I tried the :adaptive-logic=highest playback parameter, but it didn't help, most likely because the content has only one (high-quality) rendition.
Parameters for the player are listed here: https://wiki.videolan.org/VLC_command-line_help/.
But there are a lot of them and I don't understand how they all work, so I'm asking for help.
Maybe I can skip some frames? That would not be very noticeable, but it might help.
Update:
Now I'm trying these options, but I don't notice much change...
--no-audio
--postproc-q=1
--ffmpeg-hw
--avcodec-skip-frame=1
--avcodec-skip-idct=1
--avcodec-skiploopfilter=1
--avcodec-hw=any
--sout-avcodec-hurry-up
--no-sout-avcodec-interlace-me

Create a timelapse from a normal video in iOS

I have two solutions to this problem:
SOLUTION A
Convert the asset to an AVMutableComposition.
For every second, keep only one frame by removing the time ranges of all the other frames with the removeTimeRange(...) method.
SOLUTION B
Use AVAssetReader to extract all individual frames as an array of CMSampleBuffer.
Write the [CMSampleBuffer] back into a movie, keeping only about one frame in every 20, as required.
Convert the resulting video file to an AVMutableComposition and use scaleTimeRange(..) to reduce the overall timeRange of the video for the timelapse effect (see the sketch below).
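For illustration, a rough sketch of the composition/scaleTimeRange step, assuming sourceAsset is the AVAsset being converted and 20x is just an example speed-up factor (error handling omitted):

import AVFoundation

// sourceAsset is assumed to be the AVAsset you are converting.
let composition = AVMutableComposition()
let videoTrack = composition.addMutableTrack(withMediaType: .video,
                                             preferredTrackID: kCMPersistentTrackID_Invalid)!
let sourceTrack = sourceAsset.tracks(withMediaType: .video).first!

let fullRange = CMTimeRange(start: .zero, duration: sourceAsset.duration)
try? videoTrack.insertTimeRange(fullRange, of: sourceTrack, at: .zero)

// Squeeze the whole range into 1/20th of its duration (a 20x speed-up).
let scaledDuration = CMTimeMultiplyByRatio(sourceAsset.duration, multiplier: 1, divisor: 20)
composition.scaleTimeRange(fullRange, toDuration: scaledDuration)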
PROBLEMS
The first solution is not suitable for full HD videos; the video freezes in multiple places and the seek bar shows inaccurate timing.
e.g. a 12-second timelapse might be reported as having a duration of only 5 seconds, so it keeps playing even after the seek bar has finished.
I mean the timing of the video gets all messed up for some reason.
The second solution is incredibly slow. For a 10-minute HD video the memory usage grows without bound, since all the processing is done in memory.
I am searching for a technique that can produce a timelapse for a video right away, without waiting time. Solution A kind of does that, but is unsuitable because of the timing problems and stuttering.
Any suggestion would be great. Thanks!
You might want to experiment with the built-in thumbnail generation functions to see if they are fast/efficient enough for your needs.
They have the benefit of being optimised to generate images efficiently from a video stream.
Simply displaying a 'slide show' like view of the thumbnails one after another may give you the effect you are looking for.
There is information on the key class, AVAssetImageGenerator, here, including how to use it to generate multiple images:
https://developer.apple.com/reference/avfoundation/avassetimagegenerator#//apple_ref/occ/instm/AVAssetImageGenerator/generateCGImagesAsynchronouslyForTimes%3acompletionHandler%3a
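A minimal sketch of that approach, assuming asset is your AVAsset; the one-frame-per-second stride and the zero tolerances are illustrative choices (tight tolerances are more accurate but slower):

import AVFoundation

let generator = AVAssetImageGenerator(asset: asset)
generator.appliesPreferredTrackTransform = true
// Zero tolerances request exact frames; loosen them if speed matters more.
generator.requestedTimeToleranceBefore = .zero
generator.requestedTimeToleranceAfter = .zero

// One requested frame per second of source, as NSValue-wrapped CMTimes.
let seconds = stride(from: 0.0, to: asset.duration.seconds, by: 1.0)
let times = seconds.map { NSValue(time: CMTime(seconds: $0, preferredTimescale: 600)) }

generator.generateCGImagesAsynchronously(forTimes: times) { _, image, _, result, _ in
    if result == .succeeded, let image = image {
        // Append `image` to the slide-show view here.
        _ = image
    }
}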

iOS - how can I programmatically calculate the time limit for recording audio/video given a known file size limit

I have tried to Google a lot, but it seems like no one has done this before in iOS.
My issue is: my server only allows the client to upload video / audio / image files of limited size (e.g. 30 MB for video, 1 MB for audio). Given that limit, I want to figure out how much time users are allowed to record audio / video. The calculation must account for different devices; for example, the iPad 3 has a better camera than the iPad 2, so it will allow less recording time.
I am wondering if we can programmatically calculate the time limit based on the known file size limit.
Thanks,
Luan.
When working with large amounts of data such as video and audio, compression should play a part in your calculation.
Compression results can vary greatly depending on what you are recording and as a result it would be unrealistic to try to forecast a certain maximum duration.
I can think of two options:
Predetermine very restrictive recording times per device (I believe it is possible in iOS to tell an iPad 3 from an iPad 2)
Figure out a way to re-encode a smaller part of the video until it is within limits.
Best of luck!
Cantgetright has described perfectly why this is hard.
What you really care about are the megapixels of the camera (definition), the worst-case storage size of one second of video, and how many free megabytes are left on the phone.
If you know most of these elements, time can be the constraint by which you determine the last one.
Always overestimate size to guarantee it'll work no matter what. People don't know how big 5 seconds of video is on their iDevices anyway, so you can be stingy with the allotted time.
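As a rough illustration of the arithmetic involved: the bitrates below are made-up estimates, not measured device values, and you would substitute your own per-device numbers.

import Foundation

// maxSeconds = bytes * 8 / bitsPerSecond, with headroom for container
// overhead and bitrate variance (the overestimation suggested above).
func maxRecordingSeconds(limitBytes: Int,
                         estimatedVideoBitrate: Double,
                         estimatedAudioBitrate: Double,
                         safetyFactor: Double = 0.8) -> Double {
    let totalBitrate = estimatedVideoBitrate + estimatedAudioBitrate
    return Double(limitBytes) * 8.0 / totalBitrate * safetyFactor
}

// e.g. a 30 MB cap with an assumed ~10 Mbps video + 128 kbps audio stream:
let seconds = maxRecordingSeconds(limitBytes: 30_000_000,
                                  estimatedVideoBitrate: 10_000_000,
                                  estimatedAudioBitrate: 128_000)
print(seconds) // ~19 seconds with the 0.8 safety factor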

MPMoviePlayer Buffer size/Adjustment

I have been using MPMoviePlayer and its playableDuration to check the available (buffered) duration of a movie.
The playable duration always seems to be only ~1 second ahead of the current playback position, and basically I would like to increase this.
I have tried prepareToPlay, but it seems to do nothing noticeable to playableDuration.
I have tried to set as many parameters as possible pre-emptively, such as MPMovieSourceType, the media type, and the like, in an attempt to change the value, but all to no avail.
Just to clear a few things up first: I am using both MPMoviePlayer and AVPlayer, which play different streams simultaneously, since the video/audio I am using is split.
EDIT
It seems I overlooked the file size affecting the stream and should have read more in the Apple resources than elsewhere, but as far as I can tell the issue is that the file size is too large, and therefore a server-side media segmenter has to be implemented.
Apple Resource on Media Segmenting
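For the AVPlayer half of this split setup, the analogue of playableDuration is AVPlayerItem's loadedTimeRanges; a minimal sketch of reading the buffered headroom (the item is assumed to come from your own player):

import AVFoundation

// How far ahead of the current position the player has buffered.
func bufferedSeconds(for item: AVPlayerItem) -> Double {
    guard let range = item.loadedTimeRanges.first?.timeRangeValue else { return 0 }
    let bufferedEnd = CMTimeRangeGetEnd(range)
    return CMTimeGetSeconds(bufferedEnd) - CMTimeGetSeconds(item.currentTime())
}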

iOS AVPlayer: How to slow down a 30fps video to 1fps

I have a 30fps QuickTime .mov of still images that I created with AVAssetWriter. (It's only about 10 frames long.) I would like the user to be able to slow it down to about 1fps using a UISlider, but when I adjust the AVPlayer rate property from 1 down to 0, it doesn't get anywhere near 1fps; it just stops playback (because a rate of 0 effectively stops/pauses it, which makes sense). But how can I slow the player down to about 1fps? I think I'd need to do some math to calculate the actual rate, but that's where I'm stuck. Would it end up being something like 0.000000000000001?
Thanks!
If this were a requirement of mine, I would approach it as follows (also suggested by Inafziger in the comments): use AVAssetReader and roll my own viewer for the images. This would give you precise control using a timer, as stated in your comments. Make sure you reuse some preallocated image memory (you can probably get away with space for a single image). I would probably take a pull approach, like Core Audio: when you need an image, pull it from some image-buffer-manager class that calls AVAssetReader's read function. That way you can have N buffers that will always be available. This may be a little overkill; I do believe AVAssetReader pre-decodes some amount of the movie upon initialization, which is why I say you can more than likely get away with using a single buffer for reading image data into.
From your comment about memory issues: I do believe there are some functions in AVAssetReader and the associated classes that follow the create rule.
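A rough sketch of that reader-plus-timer idea (the 32BGRA pixel format is an illustrative choice, and error handling is omitted). As an aside on the rate math in the question: 1 fps from a 30fps movie corresponds to a rate of 1/30 ≈ 0.033, not 0.000000000000001, though whether AVPlayer renders smoothly at such low rates depends on the item, which is why the reader approach is suggested.

import AVFoundation

// Pulls one decoded frame per timer tick and hands it to the caller.
final class FrameStepper {
    private let reader: AVAssetReader
    private let output: AVAssetReaderTrackOutput
    private var timer: Timer?

    init(asset: AVAsset) throws {
        reader = try AVAssetReader(asset: asset)
        let track = asset.tracks(withMediaType: .video).first!
        output = AVAssetReaderTrackOutput(track: track, outputSettings: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
        ])
        reader.add(output)
        _ = reader.startReading()
    }

    func start(fps: Double, onFrame: @escaping (CVPixelBuffer) -> Void) {
        timer = Timer.scheduledTimer(withTimeInterval: 1.0 / fps, repeats: true) { [weak self] t in
            guard let sample = self?.output.copyNextSampleBuffer(),
                  let pixelBuffer = CMSampleBufferGetImageBuffer(sample) else {
                t.invalidate() // end of movie or reader failure
                return
            }
            onFrame(pixelBuffer) // draw the frame into your view here
        }
    }
}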
