When loading a video from iCloud, I want a UIProgressView to be updated with the progress. For some reason, the progressHandler never gets called. I've tried to do it the way it's done in SamplePhotosApp (which loads images, though), but I can't get it working.
Here's the code I use for options and requesting the video:
let videoRequestOptions = PHVideoRequestOptions()
videoRequestOptions.deliveryMode = .FastFormat
videoRequestOptions.version = .Original
videoRequestOptions.networkAccessAllowed = true
videoRequestOptions.progressHandler = { (progress, error, stop, info) in
    dispatch_async(dispatch_get_main_queue(), {
        print(progress)
        self.progressView.progress = Float(progress)
    })
}

PHImageManager.defaultManager().requestPlayerItemForVideo(asset, options: videoRequestOptions, resultHandler: {
    result, info in
    self.videoPlayer = AVPlayer(playerItem: result!)
})
This code won't even print anything, so it looks like the progressHandler block isn't called at all. The video I'm requesting is definitely in iCloud, because the request crashes if networkAccessAllowed is set to false.
Where's the problem?
One solution is to use PHImageManager's requestAVAssetForVideo instead of requestPlayerItemForVideo. You can then create a new AVPlayerItem from the AVAsset you get as the result. This works with the same options used in the question.
I think this has something to do with the way videos are stored in iCloud, because they probably aren't stored as AVPlayerItems there. This is just an educated guess; I don't have any facts about why the other method doesn't work properly.
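A rough sketch of that workaround, reusing the asset and the videoRequestOptions object from the question (so the progressHandler set up above still reports the iCloud download progress):
PHImageManager.defaultManager().requestAVAssetForVideo(asset, options: videoRequestOptions, resultHandler: {
    avAsset, audioMix, info in
    guard let avAsset = avAsset else { return }
    dispatch_async(dispatch_get_main_queue(), {
        // Wrap the downloaded AVAsset in a fresh AVPlayerItem for playback.
        self.videoPlayer = AVPlayer(playerItem: AVPlayerItem(asset: avAsset))
    })
})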
I'm trying to get the asset property from an AVAssetTrack object, but it's sometimes nil. The problem seems to occur only after I use DispatchQueue.main.async.
According to the documentation, it's necessary to use loadValuesAsynchronously(forKeys:completionHandler:) to avoid blocking the main thread, and to return to the main thread after loading is done.
let asset = AVURLAsset(url: videoInAppBundleURL)

let track = asset.tracks(withMediaType: .video).first!
assert(track.asset != nil) // passes

track.loadValuesAsynchronously(forKeys: [#keyPath(AVAssetTrack.asset)]) {
    assert(track.asset != nil) // passes

    DispatchQueue.main.async {
        assert(track.asset != nil) // FAILS
        // [...]
    }
}
What I found out is:
- It makes no difference whether I'm running on a device or the simulator.
- It doesn't seem to be a problem with the video / videoURL: the video is part of the main bundle, I tried both .mp4 and .mov files, and I made sure the video works by displaying it via an AVPlayerViewController.
Here is a working demo project.
I'm also wondering: why is AVAssetTrack's asset property optional? (All the other properties are non-optional.)
Note: this question has been edited after reading Matt's helpful comments and further investigation.
I reproduced the issue, with some tweaking of your GitHub example, like this:
let asset = AVURLAsset(url: videoInAppBundleURL)
let tracksKey = #keyPath(AVAsset.tracks)
asset.loadValuesAsynchronously(forKeys: [tracksKey]) {
    let track = asset.tracks(withMediaType: .video).first!
    DispatchQueue.main.async {
        assert(track.asset != nil) // fails
    }
}
Okay, but now watch closely as I perform an amazing trick:
let asset = AVURLAsset(url: videoInAppBundleURL)
let tracksKey = #keyPath(AVAsset.tracks)
asset.loadValuesAsynchronously(forKeys: [tracksKey]) {
    let track = asset.tracks(withMediaType: .video).first!
    DispatchQueue.main.async {
        print(asset) // <-- amazing trick
        assert(track.asset != nil) // passes!
    }
}
Whoa! All I did was add a print statement — and now suddenly the very same assertion passes. This in fact is parallel to your original statement (which you later edited out) that "Sometimes the problems are gone, when stepping through the code with the debugger."
So, now, my suspicions being thoroughly aroused, I did something unbelievably clever (even if I do say so myself). I removed the print(asset), but I switched the scheme’s configuration from Debug to Release. Presto, the assertion still passes.
So what you’ve found is a quirk of the compiler — dare I call it a bug?
But wait, there’s more. You asked, quite reasonably, why asset is Optional. It’s because it’s weak:
weak open var asset: AVAsset? { get }
So there’s your answer. The track has only a weak reference to its asset. If we pass the track down into an asynchronous queue, and we do not bring the asset itself along with us, then the weak reference lets go and the asset is lost — in a Debug build.
Hope this helps. You are probably waiting for me to make some grand conclusory statement about whether this constitutes a bug, but I’m not going to, sorry. I’ve provided two workarounds (use a Release build, or deliberately carry the asset reference down into the async queue) and that’s as far as I can go.
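A minimal sketch of the second workaround, using the same example code from above; explicitly mentioning asset inside the inner closure keeps a strong reference alive across the hop to the main queue:
let asset = AVURLAsset(url: videoInAppBundleURL)
let tracksKey = #keyPath(AVAsset.tracks)
asset.loadValuesAsynchronously(forKeys: [tracksKey]) {
    let track = asset.tracks(withMediaType: .video).first!
    DispatchQueue.main.async {
        // Capturing `asset` here means the track's weak reference to it
        // cannot be cleared before we read track.asset.
        _ = asset
        assert(track.asset != nil) // passes
    }
}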
Here is a link to a GIF of the problem:
https://gifyu.com/images/ScreenRecording2017-01-25at02.20PM.gif
I'm taking a PHAsset from the camera roll, adding it to a mutable composition, adding another video track, manipulating that added track, and then exporting it through AVAssetExportSession. The result is a QuickTime file with a .mov file extension saved in NSTemporaryDirectory():
guard let exporter = AVAssetExportSession(asset: mergedComposition, presetName: AVAssetExportPresetHighestQuality) else {
    fatalError()
}

exporter.outputURL = temporaryUrl
exporter.outputFileType = AVFileTypeQuickTimeMovie
exporter.shouldOptimizeForNetworkUse = true
exporter.videoComposition = videoContainer

// Export the new video
delegate?.mergeDidStartExport(session: exporter)
exporter.exportAsynchronously() { [weak self] in
    DispatchQueue.main.async {
        self?.exportDidFinish(session: exporter)
    }
}
I then take this exported file and load it into a mapper object that applies 'slow motion' to the clip based on some time mappings given to it. The result here is an AVComposition:
func compose() -> AVComposition {
    let composition = AVMutableComposition(urlAssetInitializationOptions: [AVURLAssetPreferPreciseDurationAndTimingKey: true])
    let emptyTrack = composition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)
    let audioTrack = composition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: kCMPersistentTrackID_Invalid)

    let asset = AVAsset(url: url)
    guard let videoAssetTrack = asset.tracks(withMediaType: AVMediaTypeVideo).first else { return composition }

    var segments: [AVCompositionTrackSegment] = []
    for map in timeMappings {
        let segment = AVCompositionTrackSegment(url: url, trackID: kCMPersistentTrackID_Invalid, sourceTimeRange: map.source, targetTimeRange: map.target)
        segments.append(segment)
    }

    emptyTrack.preferredTransform = videoAssetTrack.preferredTransform
    emptyTrack.segments = segments

    if let _ = asset.tracks(withMediaType: AVMediaTypeVideo).first {
        audioTrack.segments = segments
    }

    return composition.copy() as! AVComposition
}
Then I load this file, as well as the original file (which has also been mapped to slow motion), into AVPlayerItems to play in AVPlayers, each connected to an AVPlayerLayer in my app:
let firstItem = AVPlayerItem(asset: originalAsset)
let player1 = AVPlayer(playerItem: firstItem)
firstItem.audioTimePitchAlgorithm = AVAudioTimePitchAlgorithmVarispeed
player1.actionAtItemEnd = .none
firstPlayer.player = player1
// set up player 2
let secondItem = AVPlayerItem(asset: renderedVideo)
secondItem.seekingWaitsForVideoCompositionRendering = true //tried false as well
secondItem.audioTimePitchAlgorithm = AVAudioTimePitchAlgorithmVarispeed
secondItem.videoComposition = nil // tried AVComposition(propertiesOf: renderedVideo) as well
let player2 = AVPlayer(playerItem: secondItem)
player2.actionAtItemEnd = .none
secondPlayer.player = player2
I then have a start and end time to loop through these videos over and over. I don't use PlayerItemDidReachEnd because I'm not interested in the end; I'm interested in the user-entered time. I even use a DispatchGroup to ensure that both players have finished seeking before trying to replay the video:
func playAllPlayersFromStart() {
    let dispatchGroup = DispatchGroup()

    dispatchGroup.enter()
    firstPlayer.player?.currentItem?.seek(to: startTime, toleranceBefore: kCMTimeZero, toleranceAfter: kCMTimeZero, completionHandler: { _ in
        dispatchGroup.leave()
    })

    DispatchQueue.global().async { [weak self] in
        guard let startTime = self?.startTime else { return }
        dispatchGroup.wait()

        dispatchGroup.enter()
        self?.secondPlayer.player?.currentItem?.seek(to: startTime, toleranceBefore: kCMTimeZero, toleranceAfter: kCMTimeZero, completionHandler: { _ in
            dispatchGroup.leave()
        })
        dispatchGroup.wait()

        DispatchQueue.main.async { [weak self] in
            self?.firstPlayer.player?.play()
            self?.secondPlayer.player?.play()
        }
    }
}
The strange part here is that the original asset, which has also been mapped via my compose() function, loops perfectly fine. However, the renderedVideo, which has also been run through compose(), sometimes freezes when seeking during one of the CMTimeMapping segments. The only difference between the file that freezes and the file that doesn't is that one has been exported to NSTemporaryDirectory via the AVAssetExportSession to combine the two video tracks into one. They're both the same duration. I'm also sure that it's only the video layer that freezes and not the audio, because if I add boundary time observers to the player that freezes, they still fire and the playback loops. The audio also loops properly.
To me the strangest part is that the video 'resumes' if it makes it past the spot where it paused to start the seek after a 'freeze'. I've been stuck on this for days and would really love some guidance.
Other odd things to note:
- Even though the CMTimeMappings of the original and the exported asset have exactly the same durations, you'll notice that the rendered asset's slow-motion ramp is more 'choppy' than the original.
- Audio continues when the video freezes.
- The video almost only ever freezes during slow-motion sections (caused by the CMTimeMapping objects based on segments).
- The rendered video seems to have to play 'catch up' at the beginning. Even though I'm calling play after both have finished seeking, it seems to me that the right side plays faster at the beginning as a catch-up. The strange part is that the segments are exactly the same, just referencing two separate source files: one located in the asset library, the other in NSTemporaryDirectory.
- Both the AVPlayer's and AVPlayerItem's status are 'readyToPlay' before I call play.
- It seems to 'unfreeze' if the player proceeds PAST the point where it locked up.
- I tried to add observers for 'AVPlayerItemPlaybackStalled', but they were never called.
Cheers!
The problem was in the AVAssetExportSession. To my surprise, setting exporter.canPerformMultiplePassesOverSourceMediaData = true fixed the issue. Although the documentation is quite sparse and even says 'setting this property to true may have no effect', it did fix the issue. Very, very, very strange! I consider this a bug and will be filing a radar. Here are the docs on the property: canPerformMultiplePassesOverSourceMediaData
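In case it helps to see it in context, here is the exporter setup from the question with that single property added (a sketch; names like mergedComposition, temporaryUrl, and videoContainer come from the question):
guard let exporter = AVAssetExportSession(asset: mergedComposition, presetName: AVAssetExportPresetHighestQuality) else {
    fatalError()
}
exporter.outputURL = temporaryUrl
exporter.outputFileType = AVFileTypeQuickTimeMovie
exporter.shouldOptimizeForNetworkUse = true
exporter.videoComposition = videoContainer
// The one-line change: allow multiple passes over the source media during encoding.
exporter.canPerformMultiplePassesOverSourceMediaData = true
exporter.exportAsynchronously { [weak self] in
    DispatchQueue.main.async {
        self?.exportDidFinish(session: exporter)
    }
}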
It seems possible that in your playAllPlayersFromStart() method, the startTime variable may have changed between the two dispatched tasks (this would be especially likely if that value updates based on scrubbing).
If you make a local copy of startTime at the start of the function, and then use it in both blocks, you may have better luck.
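A sketch of that change applied to the method from the question (assuming startTime is a property that can change while the user scrubs):
func playAllPlayersFromStart() {
    // Snapshot startTime once so both seeks target the same time,
    // even if the property changes between the two dispatched tasks.
    let startTime = self.startTime

    let dispatchGroup = DispatchGroup()
    dispatchGroup.enter()
    firstPlayer.player?.currentItem?.seek(to: startTime, toleranceBefore: kCMTimeZero, toleranceAfter: kCMTimeZero, completionHandler: { _ in
        dispatchGroup.leave()
    })

    DispatchQueue.global().async { [weak self] in
        dispatchGroup.wait()

        dispatchGroup.enter()
        self?.secondPlayer.player?.currentItem?.seek(to: startTime, toleranceBefore: kCMTimeZero, toleranceAfter: kCMTimeZero, completionHandler: { _ in
            dispatchGroup.leave()
        })
        dispatchGroup.wait()

        DispatchQueue.main.async {
            self?.firstPlayer.player?.play()
            self?.secondPlayer.player?.play()
        }
    }
}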
I'm using Swift to show content from an AVPlayer in a view's AVPlayerLayer. The associated AVPlayerItem has a videoComposition, and a slightly simplified version of the code to create it (without error checking, etc.) looks like this:
playerItem.videoComposition = AVVideoComposition(asset: someAsset, applyingCIFiltersWithHandler: {
    [unowned self] (request: AVAsynchronousCIImageFilteringRequest) in

    let paramDict = << set up parameter dictionary based on class vars >>

    // filter the image
    let filter = self.ciFilterWithParamDict(paramDict)
    filter.setValue(request.sourceImage, forKey: kCIInputImageKey)
    if let filteredImage = filter.outputImage {
        request.finishWithImage(filteredImage, context: nil)
    }
})
This all works as expected when the AVPlayer is playing or seeking. And if a new videoComposition is created and loaded, the AVPlayerLayer is rendered correctly.
I have not found a way, however, to "trigger" the AVPlayer/ AVPlayerItem/ AVVideoComposition to re-render when I have changed some of the values that I use to calculate filter parameters. If I change values and then play or seek, it is rendered correctly, but only if I play or seek. Is there no way to trigger a rendering "in place"?
The best way that I know to do this is simply to create a new AVVideoComposition instance for the AVPlayerItem when you edit your CIFilter inputs on a paused AVPlayer. In my experience it's far faster and cleaner than swapping player items out of and back into the player. You might think that creating a new video composition is slow, but really all you are doing is redefining the render path at that specific frame, which is almost as efficient as invalidating the part of the Core Image cache that was affected by your change.
The key here is that the video composition of the player item must be invalidated in some way to trigger a redraw. Simply changing the input parameters of the Core Image filters has, sadly, no way (that I know of) of invalidating the video composition, which is the source of the issue.
You can get even more efficient by creating an AVMutableVideoComposition instance for the AVPlayerItem and mutating it in some way (by changing things like instructions, animationTool, or frameDuration) when editing on pause.
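A rough sketch of the first idea, reusing the handler from the question; currentParamDict is a hypothetical property holding the latest filter parameters, and re-assigning the freshly built composition while paused is what triggers the redraw:
func refreshPausedFrame() {
    playerItem.videoComposition = AVVideoComposition(asset: someAsset, applyingCIFiltersWithHandler: {
        [unowned self] (request: AVAsynchronousCIImageFilteringRequest) in
        // Rebuild the filter with the newest parameter values.
        let filter = self.ciFilterWithParamDict(self.currentParamDict)
        filter.setValue(request.sourceImage, forKey: kCIInputImageKey)
        if let filteredImage = filter.outputImage {
            request.finishWithImage(filteredImage, context: nil)
        }
    })
}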
I used a hack to replace the avPlayerItem entirely to force a refresh. But it would be much better if there was a way to trigger the avPlayerItem to re-render directly.
// If the video is paused, force the player to re-render the frame.
if (self.avPlayer.currentItem.asset && self.avPlayer.rate == 0) {
    CMTime currentTime = self.avPlayerItem.currentTime;
    [self.avPlayer replaceCurrentItemWithPlayerItem:nil];
    [self.avPlayer replaceCurrentItemWithPlayerItem:self.avPlayerItem];
    [self.avPlayerItem seekToTime:currentTime toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];
}
This one might help you.
let currentTime = self.player.currentTime()
self.player.play()
self.player.pause()
self.player.seek(to: currentTime, toleranceBefore: kCMTimeZero, toleranceAfter: kCMTimeZero)
Along the lines of this answer, but instead of creating a brand new AVVideoComposition and setting that as the player's videoComposition, it appears you can force a refresh of the current frame by simply setting videoComposition to nil and then immediately back to the existing videoComposition instance.
This results in the following simple workaround any time you want to force refresh the current frame:
let videoComposition = player.currentItem?.videoComposition
player.currentItem?.videoComposition = nil
player.currentItem?.videoComposition = videoComposition
This one worked in my case:
let playerItem = player.currentItem!
pausePlayback()
playerItem.videoComposition = nil
playerItem.videoComposition = AVVideoComposition(asset: playerItem.asset) { request in
    let source = request.sourceImage.clampedToExtent()
    filter?.setValue(source, forKey: kCIInputImageKey) // any CIFilter
    if let output = filter?.outputImage {
        request.finish(with: output, context: nil)
    }
}
resumePlayback()
Has anyone figured out how to extract the video portion from a Live Photo? I'm working on an app to convert Live Photos into a GIF, and the first step is to get the video file from the Live Photo. It seems like it should be possible, because if you plug your phone into a Mac you can see the separate image and video files. I've kinda run into a brick wall in the extraction process; I've tried many ways to do it and they all fail.
The first thing I did was obtain a PHAsset for what I think is the video part of the Live Photo, by doing the following:
if let livePhoto = info["UIImagePickerControllerLivePhoto"] as? PHLivePhoto {
    let assetResources = PHAssetResource.assetResourcesForLivePhoto(livePhoto)
    for assetRes in assetResources {
        if (assetRes.type == .PairedVideo) {
            let assets = PHAsset.fetchAssetsWithLocalIdentifiers([assetRes.assetLocalIdentifier], options: nil)
            if let asset = assets.firstObject as? PHAsset {
                // ... (conversion attempts below)
            }
        }
    }
}
To convert the PHAsset to an AVAsset I've tried:
asset.requestContentEditingInputWithOptions(nil, completionHandler: { (contentEditingInput, info) -> Void in
    if let url = contentEditingInput?.fullSizeImageURL {
        let movieUrl = url.absoluteString + ".mov"
        let avAsset = AVURLAsset(URL: NSURL(fileURLWithPath: movieUrl), options: nil)
        debugPrint(avAsset)
        debugPrint(avAsset.duration.value)
    }
})
I don't think this one works, because the debug print of duration.value gives 0.
I've also tried without the ".mov" addition and it still doesn't work.
I also tried:
PHImageManager.defaultManager().requestAVAssetForVideo(asset, options: nil, resultHandler: { (avAsset, audioMix, info) -> Void in
    debugPrint(avAsset)
})
And debugPrint(avAsset) prints nil, so it doesn't work.
I'm kind of afraid they might have made it impossible to do; it seems like I'm going in circles, since the PHAsset I got still seems to be a Live Photo and not actually a video.
Use the PHAssetResourceManager to get the video file from the PHAssetResource.
PHAssetResourceManager.defaultManager().writeDataForAssetResource(assetRes,
    toFile: fileURL, options: nil, completionHandler: { error in
        // Video file has been written to the path specified via fileURL
})
NOTE: The Live Photo specific APIs were introduced in iOS 9.1
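A fuller sketch of the same idea in the question's Swift 2-era style; livePhotoAsset is assumed to be the PHAsset backing the Live Photo (e.g. fetched via PHAsset.fetchAssetsWithLocalIdentifiers), and the temporary file name is arbitrary:
let resources = PHAssetResource.assetResourcesForAsset(livePhotoAsset)
if let videoResource = resources.filter({ $0.type == .PairedVideo }).first {
    let fileURL = NSURL(fileURLWithPath: NSTemporaryDirectory() + "livePhoto.mov")
    PHAssetResourceManager.defaultManager().writeDataForAssetResource(videoResource, toFile: fileURL, options: nil) { error in
        if error == nil {
            // The paired video now lives at fileURL; wrap it in an AVURLAsset for the GIF conversion.
            let avAsset = AVURLAsset(URL: fileURL, options: nil)
            debugPrint(avAsset.duration)
        }
    }
}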
// suppose you have PHAsset instance (you can get it via [PHAsset fetchAssetsWithOptions:...])
PHAssetResource *videoResource = nil;
NSArray *resourcesArray = [PHAssetResource assetResourcesForAsset:asset];

const NSInteger livePhotoAssetResourcesCount = 2;
const NSInteger videoPartIndex = 1;

if (resourcesArray.count == livePhotoAssetResourcesCount) {
    videoResource = resourcesArray[videoPartIndex];
}

if (videoResource) {
    NSString * const fileURLKey = @"_fileURL";
    NSURL *videoURL = [videoResource valueForKey:fileURLKey];

    // load video url using AVKit or AVFoundation
}
I accidentally found a way. I have an iOS app called GoodReader (available in the App Store) which features a Windows-like file manager. When importing a Live Photo, it saves it as a folder ending in .pvt containing the .jpg and .mov files. There is only one caveat: you need to open the Live Photo from within the Messages app after you've sent it to yourself or somebody else to see the "Import to GoodReader" option; it doesn't appear from the Photos app.
I am using a subclass of AVQueuePlayer, and when I add a new AVPlayerItem with a streaming URL the app freezes for about a second or two. By freezing I mean that it doesn't respond to touches on the UI. Also, if I have a song playing already and then add another one to the queue, AVQueuePlayer automatically starts preloading the next song while it is still streaming the first one. This makes the app unresponsive to touches on the UI for two seconds, just like when adding the first song, although the music keeps playing. So AVQueuePlayer is doing something on the main thread that is causing the apparent "freeze".
I am using insertItem:afterItem: to add my AVPlayerItem. I tested and made sure that this was the method that was causing the delay. Maybe it could be something that AVPlayerItem does when it gets activated by AVQueuePlayer at the moment of adding it to the queue.
Must point out that I am using the Dropbox API v1 beta to get the streaming URL by using this method call:
[[self restClient] loadStreamableURLForFile:metadata.path];
Then when I receive the stream URL I send it to AVQueuePlayer as follows:
[self.player insertItem:[AVPlayerItem playerItemWithURL:url] afterItem:nil];
So my question is: How do I avoid this?
Should I do the preloading of an audio stream on my own without the help of AVPlayer? If so, how do I do this?
Thanks.
Don't use playerItemWithURL; it's synchronous.
When you receive the response with the url try this:
AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:url options:nil];
NSArray *keys = @[@"playable"];

[asset loadValuesAsynchronouslyForKeys:keys completionHandler:^() {
    [self.player insertItem:[AVPlayerItem playerItemWithAsset:asset] afterItem:nil];
}];
Bump, since this is a highly rated question and similar questions online either have outdated answers or aren't great. The whole idea is pretty straightforward with AVKit and AVFoundation, which means no more depending on third-party libraries. The only issue is that it took some tinkering to put the pieces together.
AVPlayer initialization with a URL is apparently not thread-safe, or rather it's not meant to be. No matter how you initialize it on a background thread, the player attributes are going to be loaded on the main queue, causing freezes in the UI, especially in UITableViews and UICollectionViews. To solve this issue Apple provides AVAsset, which takes a URL and assists in loading the media attributes like tracks, playability, duration, etc., and can do so asynchronously, with the best part being that this loading process is cancellable (unlike other dispatch-queue background tasks, where ending work early may not be that straightforward). This means there is no need to worry about lingering zombie threads in the background as you scroll fast through a table view or collection view, ultimately piling up memory with a whole bunch of unused objects. This cancellable feature is great, and allows us to cancel any lingering AVAsset async load if it is in progress, but only during cell dequeue. The async loading process can be invoked via the loadValuesAsynchronously method, and can be cancelled (at will) at any later time (if still in progress).
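For example, a minimal sketch of that cancellation during cell reuse might look like this (the VideoCell class and its asset property are assumptions for illustration, not part of the answer above):
import AVFoundation
import UIKit

class VideoCell: UICollectionViewCell {
    // Keep a handle to the asset whose keys are being loaded asynchronously.
    var asset: AVAsset?

    override func prepareForReuse() {
        super.prepareForReuse()
        // Cancels any loadValuesAsynchronously(forKeys:) call still in flight for this cell.
        asset?.cancelLoading()
        asset = nil
    }
}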
Don't forget to handle errors properly using the results of loadValuesAsynchronously. In Swift (3/4), here's how you would load a video asynchronously and handle situations where the async process fails (due to slow networks, etc.):
TL;DR
TO PLAY A VIDEO
let asset = AVAsset(url: URL(string: self.YOUR_URL_STRING)!)
let keys: [String] = ["playable"]

var player: AVPlayer! // declared as a property (referenced below as self.player)

asset.loadValuesAsynchronously(forKeys: keys, completionHandler: {
    var error: NSError? = nil
    let status = asset.statusOfValue(forKey: "playable", error: &error)
    switch status {
    case .loaded:
        DispatchQueue.main.async {
            let item = AVPlayerItem(asset: asset)
            self.player = AVPlayer(playerItem: item)
            let playerLayer = AVPlayerLayer(player: self.player)
            playerLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
            playerLayer.frame = self.YOUR_VIDEOS_UIVIEW.bounds
            self.YOUR_VIDEOS_UIVIEW.layer.addSublayer(playerLayer)
            self.player.isMuted = true
            self.player.play()
        }
        break
    case .failed:
        DispatchQueue.main.async {
            // do something, show alert, put a placeholder image etc.
        }
        break
    case .cancelled:
        DispatchQueue.main.async {
            // do something, show alert, put a placeholder image etc.
        }
        break
    default:
        break
    }
})
NOTE:
Based on what your app wants to achieve, you may still have to do some amount of tinkering to get smoother scrolling in a UITableView or UICollectionView. You may also need to implement some amount of KVO on the AVPlayerItem properties for it to work, and there are plenty of posts here on SO that discuss AVPlayerItem KVO in detail.
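For instance, a minimal sketch of such an observation using Swift 4's block-based KVO (assuming item is the AVPlayerItem created above) could look like this:
// Keep a strong reference to the observation for as long as you need the callback.
let statusObservation = item.observe(\.status, options: [.initial, .new]) { item, _ in
    switch item.status {
    case .readyToPlay:
        // Safe to start or resume playback here.
        break
    case .failed:
        // Inspect item.error, show a placeholder, etc.
        break
    default:
        break
    }
}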
TO LOOP THROUGH ASSETS (video loops/GIFs)
To loop a video, you can use the same method above and introduce AVPlayerLooper. Here's a sample to loop a video (or perhaps a short video in GIF style). Note the use of the duration key, which is required for our video loop.
let asset = AVAsset(url: URL(string: self.YOUR_URL_STRING)!)
let keys: [String] = ["playable", "duration"]

var player: AVPlayer! // declared as properties (referenced below as self.player / self.playerLooper)
var playerLooper: AVPlayerLooper!

asset.loadValuesAsynchronously(forKeys: keys, completionHandler: {
    var error: NSError? = nil
    let status = asset.statusOfValue(forKey: "duration", error: &error)
    switch status {
    case .loaded:
        DispatchQueue.main.async {
            let playerItem = AVPlayerItem(asset: asset)
            self.player = AVQueuePlayer()
            let playerLayer = AVPlayerLayer(player: self.player)

            // define the time range for the loop using asset.duration
            let duration = playerItem.asset.duration
            let start = CMTime(seconds: duration.seconds * 0, preferredTimescale: duration.timescale)
            let end = CMTime(seconds: duration.seconds * 1, preferredTimescale: duration.timescale)
            let timeRange = CMTimeRange(start: start, end: end)

            self.playerLooper = AVPlayerLooper(player: self.player as! AVQueuePlayer, templateItem: playerItem, timeRange: timeRange)
            playerLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
            playerLayer.frame = self.YOUR_VIDEOS_UIVIEW.bounds
            self.YOUR_VIDEOS_UIVIEW.layer.addSublayer(playerLayer)
            self.player.isMuted = true
            self.player.play()
        }
        break
    case .failed:
        DispatchQueue.main.async {
            // do something, show alert, put a placeholder image etc.
        }
        break
    case .cancelled:
        DispatchQueue.main.async {
            // do something, show alert, put a placeholder image etc.
        }
        break
    default:
        break
    }
})
EDIT: As per the documentation, AVPlayerLooper requires the duration property of the asset to be fully loaded in order to be able to loop through videos. Also, the timeRange with the start and end times in the AVPlayerLooper initialization is optional if you want an infinite loop. I have also realized since I posted this answer that AVPlayerLooper is only about 70-80% accurate in looping videos, especially if your AVAsset needs to stream the video from a URL. To solve this issue, there is a totally different (yet simple) approach to looping a video:
// this will loop the video since this is a GIF-style clip
let interval = CMTime(value: 1, timescale: 2)
self.timeObserverToken = self.player?.addPeriodicTimeObserver(forInterval: interval, queue: DispatchQueue.main, using: { (progressTime) in
    if let totalDuration = self.player?.currentItem?.duration {
        if progressTime == totalDuration {
            self.player?.seek(to: kCMTimeZero)
            self.player?.play()
        }
    }
})
Gigisommo's answer for Swift 3, including the feedback from the comments:
let asset = AVAsset(url: url)
let keys: [String] = ["playable"]

asset.loadValuesAsynchronously(forKeys: keys) {
    DispatchQueue.main.async {
        let item = AVPlayerItem(asset: asset)
        self.playerCtrl.player = AVPlayer(playerItem: item)
    }
}
}