I have a video with these specs
Format : H.264 , 1280x544
FPS : 25
Data Size : 26MB
Duration : 3:00
Data Rate : 1.17 Mbit/s
While experimenting, I performed a removeTimeRange(range: CMTimeRange) on every other frame (total frames = 4225). This makes the video 2x faster, so the duration becomes 1:30.
However, when I export the video, it becomes about 12x larger, i.e. 325MB. This makes sense, since the technique decomposes the video into about 2112 pieces and stitches them back together. Apparently the inter-frame compression is lost in the process, which causes the enormous size.
This causes stuttering in the video when played with an AVPlayer and therefore poor performance.
Question: How can I apply some kind of compression while stitching the frames back together, so that the video plays smoothly and is also smaller in size?
I only want a point in the right direction. Thanks!
CODE
1) Creating an AVMutableComposition from Asset & Configuring it
func configureAssets(){
    let options = [AVURLAssetPreferPreciseDurationAndTimingKey : "true"]
    let videoAsset = AVURLAsset(url: Bundle.main.url(forResource: "Push", withExtension: "mp4")!, options: options)
    let videoAssetSourceTrack = videoAsset.tracks(withMediaType: AVMediaTypeVideo).first! as AVAssetTrack

    let comp = AVMutableComposition()
    let videoCompositionTrack = comp.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)

    do {
        try videoCompositionTrack.insertTimeRange(
            CMTimeRangeMake(kCMTimeZero, videoAsset.duration),
            of: videoAssetSourceTrack,
            at: kCMTimeZero)
        deleteSomeFrames(from: comp)
        videoCompositionTrack.preferredTransform = videoAssetSourceTrack.preferredTransform
    } catch { print(error) }

    asset = comp
}
2) Deleting every other frame.
func deleteSomeFrames(from asset: AVMutableComposition){
    let fps = Int32(asset.tracks(withMediaType: AVMediaTypeVideo).first!.nominalFrameRate)
    let sumTime = Int32(asset.duration.value) / asset.duration.timescale
    let totalFrames = sumTime * fps
    let totalTime = Float(CMTimeGetSeconds(asset.duration))
    let frameDuration = Double(totalTime / Float(totalFrames))
    let frameTime = CMTime(seconds: frameDuration, preferredTimescale: 1000)

    for frame in Swift.stride(from: 0, to: totalFrames, by: 2){
        let timing = CMTimeMultiplyByFloat64(frameTime, Float64(frame))
        print("Asset Duration = \(CMTimeGetSeconds(asset.duration))")
        print("")
        let timeRange = CMTimeRange(start: timing, duration: frameTime)
        asset.removeTimeRange(timeRange)
    }
    print("duration after time removed = \(CMTimeGetSeconds(asset.duration))")
}
3) Saving the file
func createFileFromAsset(_ asset: AVAsset){
    let documentsDirectory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0] as URL
    let filePath = documentsDirectory.appendingPathComponent("rendered-vid.mp4")

    if let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetHighestQuality){
        exportSession.outputURL = filePath
        exportSession.shouldOptimizeForNetworkUse = true
        exportSession.timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration)
        exportSession.outputFileType = AVFileTypeQuickTimeMovie

        exportSession.exportAsynchronously {
            print("finished: \(filePath) : \(exportSession.status.rawValue)")
            if exportSession.status == .failed {
                print("Export failed -> Reason: \(exportSession.error!.localizedDescription)")
                print(exportSession.error!)
            }
        }
    }
}
4) Finally update the ViewController to play the new Composition!
override func viewDidLoad() {
    super.viewDidLoad()

    // Create the AVPlayer and play the composition
    assetConfig.configureAssets()
    let snapshot: AVComposition = assetConfig.asset as! AVComposition
    let playerItem = AVPlayerItem(asset: snapshot)
    player = AVPlayer(playerItem: playerItem)

    let playerLayer = AVPlayerLayer(player: player)
    playerLayer.frame = CGRect(x: 0, y: 0, width: self.view.frame.width, height: self.view.frame.height)
    self.view.layer.addSublayer(playerLayer)
    player?.play()
}
If you are using AVMutableComposition, you will notice that each composition may contain one or more AVCompositionTracks (or AVMutableCompositionTracks), and the best way to edit your composition is usually to operate on the individual tracks rather than on the whole composition.
However, if your goal is simply to speed up the video, editing the tracks is not necessary.
So I will try my best to tell you what I know about your question.
About video stuttering during playback
Possible reason for the stuttering
Notice that you are using the method removeTimeRange(range: CMTimeRange). This method removes the time range from the composition, yes, but it does NOT automatically fill the gap left by each removed range.
Visualized example
[F stands for Frame, E stands for Empty]
org_video --> F-F-F-F-F-F-F-F...
After removing the time ranges, the composition looks like this:
after_video --> F-E-F-E-F-E-F-E...
whereas you might expect the video to look like this:
target_video --> F-F-F-F...
This is the most likely reason for the stuttering during playback.
Suggested solution
So if you want to shorten your video and make its rate faster (or slower), you probably want the method scaleTimeRange(_:toDuration:).
Example
AVMutableComposition *project; // suppose this video is 200s long
// Scale the whole composition to 100s, i.e. play it back at 2x speed
[project scaleTimeRange:CMTimeRangeMake(kCMTimeZero, project.duration)
             toDuration:CMTimeMakeWithSeconds(100, project.duration.timescale)];
This method makes the video play faster or slower.
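In Swift, applied to the composition built in configureAssets() above, that would look roughly like this (a minimal sketch; comp is the AVMutableComposition from the question):
// Instead of removing every other frame, retime the whole composition.
// Halving the duration doubles the playback rate without re-cutting frames,
// so the inter-frame compression of the source is preserved.
let fullRange = CMTimeRangeMake(kCMTimeZero, comp.duration)
let halfDuration = CMTimeMultiplyByFloat64(comp.duration, 0.5)
comp.scaleTimeRange(fullRange, toDuration: halfDuration)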
About the file size
A video file's size is affected mainly by its bit rate and format. Since you are using H.264, the most likely cause of the size increase is the bit rate.
In your code you are using AVAssetExportSession:
AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetHighestQuality)
You gave it the preset AVAssetExportPresetHighestQuality.
In my own project, exports made with this preset came out at roughly 20~30 Mbps regardless of the source video's bit rate, and Apple's presets do not let you set the bit rate manually.
Possible Solution
There is a third-party tool called SDAVAssetExportSession that lets you fully configure your export session; you might want to study its code to build a custom export preset.
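A rough sketch of what a lower-bit-rate export through SDAVAssetExportSession might look like, reusing asset and filePath from the question's createFileFromAsset; the initializer and property names belong to that third-party project, so treat them as assumptions to verify against its README:
// Hypothetical settings; tune width/height/bit rate for the 1280x544, ~1.2 Mbit/s source.
let encoder = SDAVAssetExportSession(asset: asset)
encoder.outputFileType = AVFileTypeMPEG4
encoder.outputURL = filePath
encoder.videoSettings = [
    AVVideoCodecKey: AVVideoCodecH264,
    AVVideoWidthKey: 1280,
    AVVideoHeightKey: 544,
    AVVideoCompressionPropertiesKey: [
        AVVideoAverageBitRateKey: 1_200_000 // target roughly the original data rate
    ]
]
encoder.audioSettings = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVNumberOfChannelsKey: 2,
    AVSampleRateKey: 44100,
    AVEncoderBitRateKey: 128_000
]
encoder.exportAsynchronously {
    print("custom export finished with status \(encoder.status.rawValue)")
}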
That is what I can tell you right now. I hope it helps :>
Related
We have successfully configured Subtitles/Captions in Azure Media Player which plays media on the Web side.
But how do we configure the same for playing the media managed by AMS in iOS with the native AVPlayer? We know that captions/subtitles can be played in the native iOS player with a sidecar WebVTT file, but is the "transcript.vtt" file generated by AMS the sidecar WebVTT file?
If not, how do we generate the sidecar WebVTT file?
We have implemented the code as below with Media file being accessed from AMS link and a locally downloaded transcript.vtt file, but it fails.
[EDITED : 20200413]
However, when we have local media file and local transcript.vtt file, or when we directly access the media file in the media storage account (https://mediastorageaccount.blob.core.windows.net/container/file.mp4) it works fine. But, when we access the encoded file from the link generated by AMS Transform (https://mediaservice-inct.streaming.media.azure.net/788888-6666-4444-aaaa-823422j218/file.ism/manifest(format=m3u8-cmaf)) it fails.
What is wrong here?
func playVideo()
{
    let strUrl = "https://mediaservice-inct.streaming.media.azure.net/79510-6eb-340-a90-824218/German-FAST_Lesson-2-Dialog.ism/manifest(format=m3u8-cmaf)"
    localVideoAsset = AVURLAsset(url: URL(string: strUrl)!)

    // We have to add tracks to the AVMutableComposition as below.
    // First, add the video track to the AVMutableComposition.
    let videoTrack = videoPlusSubtitles.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
    do{
        guard localVideoAsset!.tracks.count > 0 else{
            // error msg
            return
        }
        try? videoTrack?.insertTimeRange(CMTimeRangeMake(start: CMTime.zero, duration: localVideoAsset!.duration),
                                         of: localVideoAsset!.tracks(withMediaType: .video)[0],
                                         at: seconds)
    }

    // After that, add the subtitle track to the AVMutableComposition.
    if isEnglishSubtitle {
        setSubtitleTrack(subtitle: "transcript")
    }else{
        setSubtitleTrack(subtitle: "transcript_tr")
    }

    // After setting the video track and subtitle track, hand the composition to the player as below.
    player = AVPlayer(playerItem: AVPlayerItem(asset: videoPlusSubtitles))
    playerLayer.removeFromSuperlayer()
    playerLayer = AVPlayerLayer(player: player)
    playerLayer.frame = self.videoView.bounds
    playerLayer.videoGravity = .resizeAspect
    self.videoView.layer.addSublayer(playerLayer)
    player.play()
}

func setSubtitleTrack(subtitle: String){
    print(subtitle)
    print(seconds)

    // Check whether a previous subtitle track exists; if so, remove it first.
    if subtitleTrack != nil{
        videoPlusSubtitles.removeTrack(subtitleTrack!)
    }

    // Load the subtitle file from the bundle.
    let subtitleAsset = AVURLAsset(url: Bundle.main.url(forResource: subtitle, withExtension: ".vtt")!)

    // And add the new track here.
    subtitleTrack = videoPlusSubtitles.addMutableTrack(withMediaType: .text, preferredTrackID: kCMPersistentTrackID_Invalid)
    do{
        guard subtitleAsset.tracks.count > 0 else{
            // error msg
            return
        }
        try? subtitleTrack?.insertTimeRange(CMTimeRangeMake(start: CMTime.zero, duration: localVideoAsset!.duration),
                                            of: subtitleAsset.tracks(withMediaType: .text)[0],
                                            at: seconds)
    }
}
I suspect the issue is not caused by the AMS stream. To double-check, you may want to try another HLS stream (e.g. one provided by Apple). Apple has specific requirements for playing VTT in AVPlayer. I've included an Apple doc link which has a lot of streaming examples, and other links that may be helpful:
https://developer.apple.com/streaming/examples/
How to add external .vtt subtitle file to AVPlayerViewController in tvOS
AVUrlAsset and WebVTTs
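As a quick sanity check, a sketch like the one below swaps the AMS manifest for a known-good HLS stream; APPLE_EXAMPLE_HLS_URL is a placeholder for one of the streams listed on Apple's examples page above. If the same player code works with that stream, the problem is more likely on the AMS/manifest side than in your composition code.
// APPLE_EXAMPLE_HLS_URL is a placeholder; paste a stream URL from Apple's streaming examples page.
let testUrl = URL(string: APPLE_EXAMPLE_HLS_URL)!
let testAsset = AVURLAsset(url: testUrl)
player = AVPlayer(playerItem: AVPlayerItem(asset: testAsset))
player.play()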
Here is a link to a GIF of the problem:
https://gifyu.com/images/ScreenRecording2017-01-25at02.20PM.gif
I'm taking a PHAsset from the camera roll, adding it to a mutable composition, adding another video track, manipulating that added track, and then exporting it through AVAssetExportSession. The result is a QuickTime file with a .mov extension saved in NSTemporaryDirectory():
guard let exporter = AVAssetExportSession(asset: mergedComposition, presetName: AVAssetExportPresetHighestQuality) else {
fatalError()
}
exporter.outputURL = temporaryUrl
exporter.outputFileType = AVFileTypeQuickTimeMovie
exporter.shouldOptimizeForNetworkUse = true
exporter.videoComposition = videoContainer
// Export the new video
delegate?.mergeDidStartExport(session: exporter)
exporter.exportAsynchronously { [weak self] in
    DispatchQueue.main.async {
        self?.exportDidFinish(session: exporter)
    }
}
I then take this exported file and load it into a mapper object that applies 'slow motion' to the clip based on some time mappings given to it. The result here is an AVComposition:
func compose() -> AVComposition {
    let composition = AVMutableComposition(urlAssetInitializationOptions: [AVURLAssetPreferPreciseDurationAndTimingKey: true])
    let emptyTrack = composition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)
    let audioTrack = composition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: kCMPersistentTrackID_Invalid)

    let asset = AVAsset(url: url)
    guard let videoAssetTrack = asset.tracks(withMediaType: AVMediaTypeVideo).first else { return composition }

    var segments: [AVCompositionTrackSegment] = []
    for map in timeMappings {
        let segment = AVCompositionTrackSegment(url: url, trackID: kCMPersistentTrackID_Invalid, sourceTimeRange: map.source, targetTimeRange: map.target)
        segments.append(segment)
    }

    emptyTrack.preferredTransform = videoAssetTrack.preferredTransform
    emptyTrack.segments = segments

    if let _ = asset.tracks(withMediaType: AVMediaTypeVideo).first {
        audioTrack.segments = segments
    }

    return composition.copy() as! AVComposition
}
Then I load this file, as well as the original file (which has also been mapped to slow motion), into AVPlayerItems to play in AVPlayers connected to AVPlayerLayers in my app:
let firstItem = AVPlayerItem(asset: originalAsset)
let player1 = AVPlayer(playerItem: firstItem)
firstItem.audioTimePitchAlgorithm = AVAudioTimePitchAlgorithmVarispeed
player1.actionAtItemEnd = .none
firstPlayer.player = player1
// set up player 2
let secondItem = AVPlayerItem(asset: renderedVideo)
secondItem.seekingWaitsForVideoCompositionRendering = true //tried false as well
secondItem.audioTimePitchAlgorithm = AVAudioTimePitchAlgorithmVarispeed
secondItem.videoComposition = nil // tried AVComposition(propertiesOf: renderedVideo) as well
let player2 = AVPlayer(playerItem: secondItem)
player2.actionAtItemEnd = .none
secondPlayer.player = player2
I then have a start and end time to loop through these videos over and over. I don't use PlayerItemDidReachEnd because I'm not interested in the end; I'm interested in the user-inputted time. I even use a DispatchGroup to ENSURE that both players have finished seeking before trying to replay the video:
func playAllPlayersFromStart() {
    let dispatchGroup = DispatchGroup()

    dispatchGroup.enter()
    firstPlayer.player?.currentItem?.seek(to: startTime, toleranceBefore: kCMTimeZero, toleranceAfter: kCMTimeZero, completionHandler: { _ in
        dispatchGroup.leave()
    })

    DispatchQueue.global().async { [weak self] in
        guard let startTime = self?.startTime else { return }
        dispatchGroup.wait()

        dispatchGroup.enter()
        self?.secondPlayer.player?.currentItem?.seek(to: startTime, toleranceBefore: kCMTimeZero, toleranceAfter: kCMTimeZero, completionHandler: { _ in
            dispatchGroup.leave()
        })
        dispatchGroup.wait()

        DispatchQueue.main.async { [weak self] in
            self?.firstPlayer.player?.play()
            self?.secondPlayer.player?.play()
        }
    }
}
The strange part here is that the original asset, which has also been mapped via my compose() function, loops perfectly fine. However, the renderedVideo, which has also been run through the compose() function, sometimes freezes when seeking during one of the CMTimeMapping segments. The only difference between the file that freezes and the file that doesn't is that one has been exported to NSTemporaryDirectory via the AVAssetExportSession to combine the two video tracks into one. They're both the same duration. I'm also sure that it's only the video layer that freezes and not the audio, because if I add boundary time observers to the player that freezes, they still fire and the playback loops. The audio also loops properly.
To me the strangest part is that the video 'resumes' if it makes it past the spot where it paused to start the seek after a 'freeze'. I've been stuck on this for days and would really love some guidance.
Other odd things to note:
- Even though the CMTimeMappings of the original versus the exported asset have the exact same durations, you'll notice that the rendered asset's slow-motion ramp is more 'choppy' than the original.
- Audio continues when the video freezes.
- The video almost only ever freezes during slow-motion sections (caused by CMTimeMapping objects based on segments).
- The rendered video seems to have to play 'catch up' at the beginning. Even though I'm calling play after both have finished seeking, it seems to me that the right side plays faster in the beginning, as if catching up. The strange part is that the segments are exactly the same, just referencing two separate source files: one located in the asset library, the other in NSTemporaryDirectory.
- It seems to me that AVPlayer and AVPlayerItemStatus are 'readyToPlay' before I call play.
- It seems to 'unfreeze' if the player proceeds PAST the point where it locked up.
- I tried to add observers for 'AVPlayerItemPlaybackDidStall' but they were never called.
Cheers!
The problem was in the AVAssetExportSession. To my surprise, setting exporter.canPerformMultiplePassesOverSourceMediaData = true fixed the issue. Although the documentation is quite sparse and even claims 'setting this property to true may have no effect', it did seem to fix the problem. Very, very, very strange! I consider this a bug and will be filing a radar. Here are the docs on the property: canPerformMultiplePassesOverSourceMediaData
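In the export code from the question, that is a one-line addition before calling exportAsynchronously:
guard let exporter = AVAssetExportSession(asset: mergedComposition, presetName: AVAssetExportPresetHighestQuality) else {
    fatalError()
}
exporter.outputURL = temporaryUrl
exporter.outputFileType = AVFileTypeQuickTimeMovie
exporter.shouldOptimizeForNetworkUse = true
exporter.canPerformMultiplePassesOverSourceMediaData = true // the line that fixed the freeze
exporter.videoComposition = videoContainer
exporter.exportAsynchronously { [weak self] in
    DispatchQueue.main.async {
        self?.exportDidFinish(session: exporter)
    }
}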
It seems possible that in your playAllPlayersFromStart() method the startTime variable may have changed between the two dispatched tasks (this would be especially likely if that value updates based on scrubbing).
If you make a local copy of startTime at the start of the function, and then use it in both blocks, you may have better luck.
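A sketch of the same method with startTime captured once up front:
func playAllPlayersFromStart() {
    // Capture the value once so later scrubbing can't change it between the two seeks.
    let startTimeSnapshot = startTime
    let dispatchGroup = DispatchGroup()

    dispatchGroup.enter()
    firstPlayer.player?.currentItem?.seek(to: startTimeSnapshot, toleranceBefore: kCMTimeZero, toleranceAfter: kCMTimeZero, completionHandler: { _ in
        dispatchGroup.leave()
    })

    DispatchQueue.global().async { [weak self] in
        dispatchGroup.wait()
        dispatchGroup.enter()
        self?.secondPlayer.player?.currentItem?.seek(to: startTimeSnapshot, toleranceBefore: kCMTimeZero, toleranceAfter: kCMTimeZero, completionHandler: { _ in
            dispatchGroup.leave()
        })
        dispatchGroup.wait()

        DispatchQueue.main.async {
            self?.firstPlayer.player?.play()
            self?.secondPlayer.player?.play()
        }
    }
}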
I am combining two audio files into one. I set up two sliders to change the volume of each audio file. When I try to set preferredVolume on an AVAssetTrack I get the error '(@lvalue Float) -> $T5' is not identical to 'Float'. Are there other ways to accomplish this? The code is in Swift, but I don't mind if the answer is in Objective-C.
EDIT: How can I change the volume of each audio file with a slider or with a Float?
Code:
let type = AVMediaTypeAudio
let asset1 = AVURLAsset(URL: beatLocationURL, options: nil)
let arr2 = asset1.tracksWithMediaType(type)
let track2 = arr2.last as AVAssetTrack
track2.preferredVolume(beatVolume.value) <--where error occurs
let duration : CMTime = track2.timeRange.duration
let comp = AVMutableComposition()
let comptrack = comp.addMutableTrackWithMediaType(type,
    preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
comptrack.insertTimeRange(CMTimeRangeMake(CMTimeMakeWithSeconds(0,600), CMTimeMakeWithSeconds(5,600)), ofTrack:track2, atTime:CMTimeMakeWithSeconds(0,600), error:nil)
comptrack.insertTimeRange(CMTimeRangeMake(CMTimeSubtract(duration, CMTimeMakeWithSeconds(5,600)), CMTimeMakeWithSeconds(5,600)), ofTrack:track2, atTime:CMTimeMakeWithSeconds(5,600), error:nil)
let type3 = AVMediaTypeAudio
let asset = AVURLAsset(URL: vocalURL, options:nil)
let arr3 = asset.tracksWithMediaType(type3)
let track3 = arr3.last as AVAssetTrack
let comptrack3 = comp.addMutableTrackWithMediaType(type3, preferredTrackID:Int32(kCMPersistentTrackID_Invalid))
comptrack3.insertTimeRange(CMTimeRangeMake(CMTimeMakeWithSeconds(0,600), CMTimeMakeWithSeconds(10,600)), ofTrack:track3, atTime:CMTimeMakeWithSeconds(0,600), error:nil)
let params = AVMutableAudioMixInputParameters(track:comptrack3)
params.setVolume(1, atTime:CMTimeMakeWithSeconds(0,600))
params.setVolumeRampFromStartVolume(1, toEndVolume:0, timeRange:CMTimeRangeMake(CMTimeMakeWithSeconds(7,600), CMTimeMakeWithSeconds(3,600)))
let mix = AVMutableAudioMix()
mix.inputParameters = [params]
let item = AVPlayerItem(asset:comp)
item.audioMix = mix
You can't say this:
track2.preferredVolume(beatVolume.value)
If you want to set track2.preferredVolume, then say:
track2.preferredVolume = beatVolume.value
Of course, that will only work if beatVolume.value is a Float. If it isn't, you will have to make a Float out of it somehow.
Also, you won't be able to set the preferredVolume of track2, because it's an AVAssetTrack. An AVAssetTrack's preferredVolume isn't settable. What you want to do is wait until you're setting up your AVMutableComposition and set the volumes on its tracks. For example:
comptrack3.preferredVolume = 0.5
That will compile, and now you can figure out how to substitute some other Float as the real value. (It will not, however, change the volume of one track relative to another. If that's your goal, use an AVMutableAudioMix. See, for example, Apple's sample code here: https://developer.apple.com/library/ios/qa/qa1716/_index.html.)
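A sketch of that audio-mix approach against the composition from the question, with one AVMutableAudioMixInputParameters per composition track fed by the slider values (vocalVolume is an assumed name for the second slider, which the question mentions but doesn't name):
// One parameters object per composition track, each with its own slider-driven volume.
let beatParams = AVMutableAudioMixInputParameters(track: comptrack)
beatParams.setVolume(beatVolume.value, atTime: kCMTimeZero)

let vocalParams = AVMutableAudioMixInputParameters(track: comptrack3)
vocalParams.setVolume(vocalVolume.value, atTime: kCMTimeZero)

let mix = AVMutableAudioMix()
mix.inputParameters = [beatParams, vocalParams]

let item = AVPlayerItem(asset: comp)
item.audioMix = mix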
After loading an AVAsset like this:
AVAsset *asset = [AVAsset assetWithURL:url];
I want to know what the Sampling Rate is of the Audio track. Currently, I am getting the Audio Track like this:
AVAssetTrack *audioTrack = [[asset tracksWithMediaCharacteristic:AVMediaCharacteristicAudible] objectAtIndex:0];
Which works. But I can't seem to find any kind of property, not even after using Google ;-), that gives me the sampling rate. How does this work normally? Is it even possible? (I'm starting to doubt it more and more, because Googling is not giving me a lot of information...)
let asset = AVAsset(url: URL(fileURLWithPath: "asset/p2a2.aif"))
let track = asset.tracks[0]
let desc = track.formatDescriptions[0] as! CMAudioFormatDescription
let basic = CMAudioFormatDescriptionGetStreamBasicDescription(desc)
print(basic?.pointee.mSampleRate)
I'm using Swift so it looks a bit different but it should still work with Obj-C.
print(track.naturalTimeScale)
Also seems to give the correct answer but I'm a bit apprehensive because of the name.
Using Swift and AVFoundation :
let url = Bundle.main.url(forResource: "audio", withExtension: "m4a")!
let asset = AVAsset(url: url)
if let firstTrack = asset.tracks.first {
print("bitrate: \(firstTrack.estimatedDataRate)")
}
To find more information in your metadata, you can also consult:
https://developer.apple.com/documentation/avfoundation/avassettrack
https://developer.apple.com/documentation/avfoundation/media_assets_playback_and_editing/finding_metadata_values
Found it. I was using the MTAudioProcessingTap, so in the prepare() function I could just use:
void prepare(MTAudioProcessingTapRef tap, CMItemCount maxFrames, const AudioStreamBasicDescription *processingFormat)
{
sampleRate = processingFormat->mSampleRate;
NSLog(#"Preparing the Audio Tap Processor");
}
I am using a subclass of AVQueuePlayer, and when I add a new AVPlayerItem with a streaming URL the app freezes for about a second or two. By freezing I mean that it doesn't respond to touches on the UI. Also, if I have a song playing already and then add another one to the queue, AVQueuePlayer automatically starts preloading it while still streaming the first one. This again makes the app unresponsive to touches for about two seconds, just as when adding the first song, although playback continues. So AVQueuePlayer is doing something on the main thread that causes the apparent "freeze".
I am using insertItem:afterItem: to add my AVPlayerItem. I tested and made sure that this was the method that was causing the delay. Maybe it could be something that AVPlayerItem does when it gets activated by AVQueuePlayer at the moment of adding it to the queue.
Must point out that I am using the Dropbox API v1 beta to get the streaming URL by using this method call:
[[self restClient] loadStreamableURLForFile:metadata.path];
Then when I receive the stream URL I send it to AVQueuePlayer as follows:
[self.player insertItem:[AVPlayerItem playerItemWithURL:url] afterItem:nil];
So my question is: How do I avoid this?
Should I do the preloading of an audio stream on my own without the help of AVPlayer? If so, how do I do this?
Thanks.
Don't use playerItemWithURL; it's synchronous.
When you receive the response with the url try this:
AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:url options:nil];
NSArray *keys = @[@"playable"];
[asset loadValuesAsynchronouslyForKeys:keys completionHandler:^() {
[self.player insertItem:[AVPlayerItem playerItemWithAsset:asset] afterItem:nil];
}];
Bump, since this is a highly rated question and similar questions online either have outdated answers or aren't great. The whole idea is pretty straightforward with AVKit and AVFoundation, which means no more depending on third-party libraries. The only issue is that it took some tinkering to put the pieces together.
AVFoundation's AVPlayer initialization with a URL is apparently not thread safe, or rather it's not meant to be. No matter how you initialize it on a background thread, the player attributes are going to be loaded on the main queue, causing freezes in the UI, especially in UITableViews and UICollectionViews. To solve this issue Apple provides AVAsset, which takes a URL and assists in loading media attributes like tracks, playability, duration, etc., and can do so asynchronously, with the best part being that this loading process is cancellable (unlike other dispatch-queue background tasks, where cancelling may not be that straightforward). This means there is no need to worry about lingering zombie threads in the background as you scroll quickly through a table view or collection view, piling up memory with a whole bunch of unused objects. The cancellable feature is great and lets us cancel any lingering AVAsset async load that is still in progress, for example during cell dequeue. The async loading process is invoked with the loadValuesAsynchronously(forKeys:) method and can be cancelled at any later time (if still in progress).
Don't forget to handle errors properly using the results of loadValuesAsynchronously. In Swift (3/4), here's how you would load a video asynchronously and handle the situation where the async process fails (due to slow networks, etc.):
TL;DR
TO PLAY A VIDEO
let asset = AVAsset(url: URL(string: self.YOUR_URL_STRING)!)
let keys: [String] = ["playable"]
var player: AVPlayer!

asset.loadValuesAsynchronously(forKeys: keys, completionHandler: {
    var error: NSError? = nil
    let status = asset.statusOfValue(forKey: "playable", error: &error)
    switch status {
    case .loaded:
        DispatchQueue.main.async {
            let item = AVPlayerItem(asset: asset)
            self.player = AVPlayer(playerItem: item)
            let playerLayer = AVPlayerLayer(player: self.player)
            playerLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
            playerLayer.frame = self.YOUR_VIDEOS_UIVIEW.bounds
            self.YOUR_VIDEOS_UIVIEW.layer.addSublayer(playerLayer)
            self.player.isMuted = true
            self.player.play()
        }
        break
    case .failed:
        DispatchQueue.main.async {
            //do something, show alert, put a placeholder image etc.
        }
        break
    case .cancelled:
        DispatchQueue.main.async {
            //do something, show alert, put a placeholder image etc.
        }
        break
    default:
        break
    }
})
NOTE:
Based on what your app wants to achieve, you may still have to do some tinkering to get smooth scrolling in a UITableView or UICollectionView. You may also need to implement some KVO on the AVPlayerItem properties for it to work, and there are plenty of posts here on SO that discuss AVPlayerItem KVO in detail.
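To take advantage of the cancellable loading mentioned above, a minimal sketch of cancelling an in-flight load when a cell is reused might look like this (VideoCell and its asset property are hypothetical names, not from the original post):

import AVFoundation
import UIKit

class VideoCell: UITableViewCell {
    var asset: AVAsset?

    override func prepareForReuse() {
        super.prepareForReuse()
        // Cancel any key loading still in flight for the previous asset,
        // so fast scrolling doesn't pile up background work.
        asset?.cancelLoading()
        asset = nil
    }
}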
TO LOOP THROUGH ASSETS (video loops/GIFs)
To loop a video, you can use the same method above, introducing AVPlayerLooper. Here's sample code to loop a video (or perhaps a short video in GIF style). Note the use of the duration key, which is required for the video loop.
let asset = AVAsset(url: URL(string: self.YOUR_URL_STRING)!)
let keys: [String] = ["playable","duration"]
var player: AVPlayer!
var playerLooper: AVPlayerLooper!

asset.loadValuesAsynchronously(forKeys: keys, completionHandler: {
    var error: NSError? = nil
    let status = asset.statusOfValue(forKey: "duration", error: &error)
    switch status {
    case .loaded:
        DispatchQueue.main.async {
            let playerItem = AVPlayerItem(asset: asset)
            self.player = AVQueuePlayer()
            let playerLayer = AVPlayerLayer(player: self.player)

            // Define the time range for the loop using asset.duration
            let duration = playerItem.asset.duration
            let start = CMTime(seconds: duration.seconds * 0, preferredTimescale: duration.timescale)
            let end = CMTime(seconds: duration.seconds * 1, preferredTimescale: duration.timescale)
            let timeRange = CMTimeRange(start: start, end: end)

            self.playerLooper = AVPlayerLooper(player: self.player as! AVQueuePlayer, templateItem: playerItem, timeRange: timeRange)
            playerLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
            playerLayer.frame = self.YOUR_VIDEOS_UIVIEW.bounds
            self.YOUR_VIDEOS_UIVIEW.layer.addSublayer(playerLayer)
            self.player.isMuted = true
            self.player.play()
        }
        break
    case .failed:
        DispatchQueue.main.async {
            //do something, show alert, put a placeholder image etc.
        }
        break
    case .cancelled:
        DispatchQueue.main.async {
            //do something, show alert, put a placeholder image etc.
        }
        break
    default:
        break
    }
})
EDIT: As per the documentation, AVPlayerLooper requires the duration property of the asset to be fully loaded in order to be able to loop videos. Also, the timeRange parameter with the start and end times in the AVPlayerLooper initializer is optional if you just want an infinite loop. I have also realized since posting this answer that AVPlayerLooper is only about 70-80% accurate in looping videos, especially if your AVAsset needs to stream the video from a URL. To work around this, there is a totally different (yet simple) approach to looping a video:
//this will loop the video since this is a Gif
let interval = CMTime(value: 1, timescale: 2)
self.timeObserverToken = self.player?.addPeriodicTimeObserver(forInterval: interval, queue: DispatchQueue.main, using: { (progressTime) in
    if let totalDuration = self.player?.currentItem?.duration{
        if progressTime == totalDuration{
            self.player?.seek(to: kCMTimeZero)
            self.player?.play()
        }
    }
})
Gigisommo's answer for Swift 3 including the feedback from the comments:
let asset = AVAsset(url: url)
let keys: [String] = ["playable"]

asset.loadValuesAsynchronously(forKeys: keys) {
    DispatchQueue.main.async {
        let item = AVPlayerItem(asset: asset)
        self.playerCtrl.player = AVPlayer(playerItem: item)
    }
}