I'm running into some strange issues with AVQueuePlayer. I'll first explain the purpose of my implementation, then go over the issues I am hitting.
Running: iPhone 5s, iOS 10.1.1, LTE connection
Video: progressively downloaded .mp4, 5 MB, 4-second duration.
The purpose of the implementation is to play a progressively downloaded video that loops seamlessly. The video won't contain any sounds.
I'm using an AVQueuePlayer (supporting iOS 9 and up) to loop videos. I set it up the way Apple recommends, which is to listen for the current player item to change and then move that item to the end of the queue.
https://developer.apple.com/library/content/samplecode/avloopplayer/Listings/Projects_VideoLooper_VideoLooper_QueuePlayerLooper_swift.html#//apple_ref/doc/uid/TP40014695-Projects_VideoLooper_VideoLooper_QueuePlayerLooper_swift-DontLinkElementID_11
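For reference, a minimal sketch of that pattern, simplified from Apple's sample; the Swift 4 block-based KVO and the url parameter are my assumptions, not the sample's exact code:

import AVFoundation

final class QueuePlayerLooper {
    let player: AVQueuePlayer
    private var observation: NSKeyValueObservation?

    init(url: URL, initialItemCount: Int = 2) {
        // Seed the queue with a few items built from the same asset.
        let asset = AVURLAsset(url: url)
        player = AVQueuePlayer(items: (0..<initialItemCount).map { _ in AVPlayerItem(asset: asset) })

        // When the player advances, rewind the finished item and move it
        // to the end of the queue so playback never runs dry.
        observation = player.observe(\.currentItem, options: [.old]) { [weak self] _, change in
            guard let self = self, let finished = change.oldValue ?? nil else { return }
            finished.seek(to: .zero, completionHandler: nil)
            self.player.insert(finished, after: nil) // nil appends to the end of the queue
        }
    }

    func start() {
        player.play()
    }
}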
I am hitting 2 issues.
Issue 1: My designer gave me a video that contains a video track and an audio track of the same duration. I am able to track the buffer progress by checking the current player item's loadedTimeRanges. However, when the video loops, it isn't seamless. So we tried a video without the audio track and hit Issue 2.
Issue 2: Testing the same video, but this time it contains only a video track. The video loops amazingly; it's seamless. However, when checking loadedTimeRanges to track the buffer progress, the duration remains 0 until the video has completely loaded, and only then reports the total duration of the video.
Is Issue 2 a bug? I also find it strange that removing the audio track produces a much more seamless loop.
I've provided the code I use to check the seconds buffered below. Note that it returns a duration of 0 if playerItem.loadedTimeRanges.first?.timeRangeValue doesn't exist. I can confirm that the value does exist and that the duration is properly returned when testing both issues.
public var secondsBuffered: Float64 {
    if let playerItem = self.player?.currentItem {
        if let loadedTimeRange = playerItem.loadedTimeRanges.first?.timeRangeValue {
            let duration: Float64 = CMTimeGetSeconds(loadedTimeRange.duration)
            return duration
        }
    }
    return 0
}
Related
WWDC 2012's Real-Time Media Effects and Processing during Playback session explains how to synchronize an AVPlayer with custom audio. Paraphrasing, you
start playback of the AVPlayer and custom audio at the same time
play both at the same rate
The first part is achieved by priming the AVPlayer with prerollAtRate:completionHandler:, so playback can be started with "minimal latency", and the second part by making the AVPlayer use the iOS Audio Clock.
The code snippet assumes you have calculated the future host time at which you anticipate audio hitting the speaker (taken literally, this last phrase seems to imply supreme omniscience, so let's just read it as your [desired] audio start host time).
CMClockRef audioClock = NULL;
OSStatus err = CMAudioClockCreate(kCFAllocatorDefault, &audioClock);

if (err == noErr) {
    [myPlayer setMasterClock:audioClock];

    [myPlayer prerollAtRate:1.0 completionHandler:^(BOOL finished) {
        if (finished) {
            // Calculate future host time here
            [myPlayer setRate:1.0 time:newItemTime atHostTime:hostTime];
        } else {
            // Preroll interrupted or cancelled
        }
    }];
}
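For what it's worth, one way to pin down a concrete hostTime is to read "now" off the same audio clock and add some headroom. A hedged Swift sketch; the half-second lead is an arbitrary assumption, and it presumes the player's masterClock is this audio clock as above:

import CoreMedia

var clock: CMClock?
let err = CMAudioClockCreate(allocator: kCFAllocatorDefault, clockOut: &clock)

if err == noErr, let audioClock = clock {
    // "Now" on the audio clock, plus an arbitrary half second of headroom
    // so both the player and the custom audio engine have time to start.
    let now = CMClockGetTime(audioClock)
    let hostTime = CMTimeAdd(now, CMTimeMakeWithSeconds(0.5, preferredTimescale: now.timescale))
    // hostTime is what would be handed to setRate:time:atHostTime:
    // and to whatever scheduled-start API the custom audio code offers.
}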
It's a tiny amount of code, yet it raises so many questions. What happens if the preroll currentTime and newItemTime don't agree? Don't video and audio play at the same rate of one second per second? So shouldn't their clocks be the same? Doesn't 60fps divide evenly into 48kHz? How can the code need to know only the desired start time and no other details of my audio code? Is it due to the one iOS Audio Clock? Is this API ingenious, or an awful non-orthogonal mish-mash that won't compose well with other AVFoundation features?
Despite my doubts, the code seems to work, but I want to seamlessly loop the video and custom audio. The question is how?
I can't preroll the playing AVPlayer because that happens from currentTime (and the player wouldn't appreciate having its buffers changed while playing). Maybe an alternating pair of prerolled AVPlayers? AVPlayerLooper sounds promising. It's not actually an AVPlayer, but it wraps an AVQueuePlayer (which is). Assuming preroll works on the AVQueuePlayer and I pay extra special attention to looping the custom audio, then this may work. Otherwise I think the only remaining option is to drop the prerolling and shoehorn the video and custom audio into an audio tap within an AVComposition, which would be looped with the help of an AVPlayerLooper.
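For reference, the AVPlayerLooper route (iOS 10+) is roughly the following. A minimal sketch with an assumed videoURL; whether preroll on the underlying AVQueuePlayer cooperates with the looper is exactly the open question above:

import AVFoundation

let item = AVPlayerItem(asset: AVURLAsset(url: videoURL)) // videoURL is assumed
let queuePlayer = AVQueuePlayer()

// The looper keeps the queue player fed with fresh copies of the template
// item; it must stay alive for as long as looping should continue.
let looper = AVPlayerLooper(player: queuePlayer, templateItem: item)
queuePlayer.play()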
I am attempting to stitch together video assets using AVComposition based on the code here:
https://developer.apple.com/library/mac/samplecode/AVCompositionDebugViewer/Introduction/Intro.html
On OS X it works perfectly; however, on iOS, when playing back via AVPlayer, it only works with 1 or 2 input clips. If I attempt to add a third, nothing is played back on the AVPlayerLayer. Weirdly, if I observe the AVPlayer playback time using addPeriodicTimeObserverForInterval:, the video appears to be playing for the correct duration, but nothing plays back on the layer.
Does anyone have any insight into why this would be?
Turns out I was creating CMTime objects with differing timeScale values, which was causing rounding errors and creating gaps in my tracks. If a track had a gap then it would just fail to play. Ensuring that all my CMTime objects had the same timeScale made everything work perfectly.
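To illustrate, a hedged sketch of the stitching loop with every CMTime converted to one shared timescale. The 600 value and the stitch(_:) helper are my assumptions, not the original code:

import AVFoundation

func stitch(_ assets: [AVAsset]) throws -> AVMutableComposition {
    let timescale: CMTimeScale = 600 // one shared timescale for every CMTime
    let composition = AVMutableComposition()
    guard let track = composition.addMutableTrack(withMediaType: .video,
                                                  preferredTrackID: kCMPersistentTrackID_Invalid) else {
        return composition
    }

    var cursor = CMTime(value: 0, timescale: timescale)
    for asset in assets {
        guard let sourceTrack = asset.tracks(withMediaType: .video).first else { continue }
        // Convert the duration up front so every insertion lands on the
        // same grid and no fractional-tick gaps open up between clips.
        let duration = CMTimeConvertScale(asset.duration, timescale: timescale, method: .default)
        try track.insertTimeRange(CMTimeRange(start: .zero, duration: duration),
                                  of: sourceTrack, at: cursor)
        cursor = CMTimeAdd(cursor, duration)
    }
    return composition
}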
I am trying to find a solution to a problem I have. I have 5 UIViews, all at the same position. Each UIView holds an AVPlayer with a different video. To be more precise, they are all the same video, but encoded at different playback speeds.
Video 1x speed
Video 4x speed
Video 8x speed
Video 16x speed
Video 24x speed
By default, video 1 is visible and playing, but I should be able to switch between the videos. The switching shouldn't be visible to the user, so I need to keep them synchronized: if I am watching video 1 and switch to video 2, then video 2 should play from exactly the position where video 1 stopped.
The idea is that it should look as though the video speeds up after an action, e.g. a flick gesture.
I hope I described my issue well enough; I am very thankful for any suggestions.
In the end I am using an observer that takes a snapshot of currentTime every 5 seconds and calls seekToTime: on all the other AVPlayers. This works fine to keep them synchronized; I just needed to adapt the CMTime for each player with a different speed. As an example, here is the 4x video:
CMTime videoPosition = player1.currentTime; // current playback position (not the duration)
Float64 videoPositionInSeconds = (Float64)videoPosition.value / videoPosition.timescale; // convert the CMTime position into seconds

[player2 seekToTime:CMTimeMakeWithSeconds(videoPositionInSeconds / 4.0, player1.currentItem.asset.duration.timescale)
    toleranceBefore:kCMTimeZero
     toleranceAfter:kCMTimeZero];
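A hedged Swift sketch of the observer side; the player and speed names are assumptions, and the 5-second interval mirrors the description above:

let players: [AVPlayer] = [player1x, player4x, player8x, player16x, player24x] // assumed
let speeds: [Float64] = [1, 4, 8, 16, 24]

let interval = CMTimeMakeWithSeconds(5, preferredTimescale: 600)
let token = players[0].addPeriodicTimeObserver(forInterval: interval, queue: .main) { time in
    let seconds = CMTimeGetSeconds(time)
    // Re-seek every faster player, scaling the 1x position down by its factor.
    for (player, speed) in zip(players, speeds).dropFirst() {
        let target = CMTimeMakeWithSeconds(seconds / speed, preferredTimescale: 600)
        player.seek(to: target, toleranceBefore: .zero, toleranceAfter: .zero)
    }
}
// Keep `token` alive; pass it to removeTimeObserver(_:) when done.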
Hope this helps.
I am trying to play a Zencoder-encoded video on iOS using AVPlayer. The problem I am seeing is that the duration the player item reports is rounded/imprecise. For example, the video duration might be 173.134 and the player item will report it as a flat 174.0. This causes problems with detecting the loaded percentage and other related things. If I use the video without encoding, everything is reported correctly and precisely.
Has anyone else ever seen this or have a solution?
I had the same problem. I just check whether the difference between the current position and the item duration is less than 1 second:
- (void)playing:(CMTime)time
{
    CMTime itemDuration = _player.currentItem.asset.duration;
    NSTimeInterval currentTime = CMTimeGetSeconds(time);
    NSTimeInterval duration = CMTimeGetSeconds(itemDuration);

    if (fabs(currentTime - duration) < 1) {
        // This is the end.
    }
}
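Presumably playing: is driven by a periodic time observer; a short Swift sketch of that wiring, where the half-second interval and the stored timeObserverToken are assumptions:

// Call the end-detection check above twice a second.
let interval = CMTimeMakeWithSeconds(0.5, preferredTimescale: 600)
timeObserverToken = player.addPeriodicTimeObserver(forInterval: interval, queue: .main) { [weak self] time in
    self?.playing(time) // the playing(_:) check above
}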
The problem turned out to be with the source video / Zencoder encoding. The audio track was a slightly different length than the video track, which caused problems with the encoding. Cutting off the last second of the video so that the track durations matched up fixed the problem.
I have an AVMutableComposition with 2 audio tracks and one video track. I'm using the composition to string together about 40 different video clips from .mov files, putting the video content of each clip in the video track of my composition and the audio in the first audio track. The second audio track I use for music.
I also have a synchronized layer for titles graphics.
When I play this composition using an AVPlayer, the audio slowly gets out of sync. It takes about 4 minutes to become noticeable. It seems that if I only string together a handful of longer clips the problem is not as apparent; it is when there are many shorter clips (~40 in my test) that it gets really bad.
Pausing and playing doesn't re-sync the audio, but seeking does. In other words, if I let the video play to the end, the lip sync gets noticeably off towards the end even if I pause and play throughout; however, if I seek to a time towards the end, the audio gets back in sync.
My hacky solution for now is to seek to the currentTime + 1 frame every minute or so. This creates an unpleasant jump in the video caused by a lag in the seek operation, so not a good solution.
Exporting with an ExportSession doesn't present this problem, audio remains in sync in the output movie.
I'm wondering if the new masterClock property in the AVPlayer is the answer to this, and if it is, how is it used?
I had the same issue and fixed it, among many other audio and video problems, by specifying my times' timescales in the following manner:
CMTime(seconds: my_seconds, preferredTimescale: CMTimeScale(600))
Before, my timescale was CMTimeScale(NSEC_PER_SEC). That caused jitter when composing clips at different frame rates, plus the audio drifting out of sync that Eddy mentions here.
In spite of looking like a magic number, 600 is a common multiple of 24, 30, 60, and 120, which are the usual frame rates for different purposes. Using the common multiple avoids dragging rounding problems around when composing multiple clips.
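As a quick sanity check, one frame at each of those rates lands on an exact tick at timescale 600 (600/24 = 25, 600/30 = 20, 600/60 = 10, 600/120 = 5):

import CoreMedia

// One frame duration at common rates, expressed at timescale 600.
// All of them are whole tick counts, so stitched clips never
// accumulate fractional-tick rounding error.
for fps in [24.0, 30.0, 60.0, 120.0] {
    let frame = CMTime(seconds: 1.0 / fps, preferredTimescale: CMTimeScale(600))
    print(fps, frame.value) // 25, 20, 10, 5 ticks respectively
}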