Playing an AVMutableComposition with AVPlayer: audio gets out of sync - iOS

I have an AVMutableComposition with 2 audio tracks and one video track. I'm using the composition to string together about 40 different video clips from .mov files, putting the video content of each clip in the video track of my composition and the audio in the first audio track. The second audio track I use for music.
I also have a synchronized layer for titles graphics.
When I play this composition using an AVPlayer, the audio slowly gets out of sync. It takes about 4 minutes to start becoming noticeable. It seems like if I only string together a handful of longer clips the problem is not as apparent; it is when there are many shorter clips (~40 in my test) that it gets really bad.
Pausing and playing doesn't re-sync the audio, but seeking does. In other words, if I let the video play to the end, the lip sync gets noticeably off towards the end even if I pause and play throughout; however, if I seek to a time near the end, the audio gets back in sync.
My hacky solution for now is to seek to the currentTime + 1 frame every minute or so. This creates an unpleasant jump in the video caused by the lag in the seek operation, so it's not a good solution.
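Roughly, the workaround looks like this (a Swift sketch; the one-minute interval and the one-frame nudge at 30 fps are arbitrary choices, and player is the AVPlayer in question):

import AVFoundation

// Periodically re-seek to the current time plus one frame to force a re-sync.
// The zero-tolerance seek is what causes the visible hitch described above.
let interval = CMTime(seconds: 60, preferredTimescale: 600)
let observer = player.addPeriodicTimeObserver(forInterval: interval, queue: .main) { [weak player] time in
    guard let player = player else { return }
    let oneFrame = CMTime(value: 1, timescale: 30)
    player.seek(to: CMTimeAdd(time, oneFrame), toleranceBefore: .zero, toleranceAfter: .zero)
}
// Remember to call player.removeTimeObserver(observer) when tearing down.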
Exporting with an ExportSession doesn't present this problem, audio remains in sync in the output movie.
I'm wondering if the new masterClock property in the AVPlayer is the answer to this, and if it is, how is it used?

I had the same issue and fixed it, among many other audio and video things, by specifying time values with explicit timescales in the following manner:
CMTime(seconds: my_seconds, preferredTimescale: CMTimeScale(600))
Before, my timescale was CMTimeScale(NSEC_PER_SEC). That caused jittery playback when composing clips at different frame rates, plus the audio out-of-sync issue that Eddy mentions here.
In spite of looking like a magic number, 600 is a common multiple of 24, 30, 60 and 120, which are the usual frame rates for different purposes. Using a common multiple avoids dragging rounding problems around when composing multiple clips.
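To make the rounding argument concrete, here is a quick Swift sketch comparing one 24 fps frame duration in both timescales:

import AVFoundation

// One frame at 24 fps, expressed with each timescale:
let frameAt600  = CMTime(value: 25, timescale: 600)    // 25/600 is exactly 1/24 s
let frameAtNano = CMTime(seconds: 1.0 / 24.0,
                         preferredTimescale: CMTimeScale(NSEC_PER_SEC))
// 1/24 s has no exact integer representation at a nanosecond timescale
// (1_000_000_000 / 24 is not a whole number), so every insertion carries a
// tiny rounding error that can accumulate across ~40 clip boundaries.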

Related

Performance issues with AVMutableComposition - scaleTimeRange

I am using scaleTimeRange:toDuration: to produce a fast-motion effect of up to 10x the original video speed. But I noticed that videos start to stutter when played through an AVPlayer at 10x.
I also noticed that the same composition plays smoothly in QuickTime on OS X.
Another question states that the reason for this is a hardware limitation, but I want to know if there is a way around it, so that the fast-motion effect plays smoothly over the length of the entire video.
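For reference, the scaling call in question looks roughly like this (a Swift sketch; composition stands in for my AVMutableComposition and the 10x factor is illustrative):

import AVFoundation

// Compress the composition's whole time range to 1/10 of its duration (10x speed).
let originalDuration = composition.duration
let scaledDuration = CMTime(value: originalDuration.value / 10,
                            timescale: originalDuration.timescale)
composition.scaleTimeRange(CMTimeRange(start: .zero, duration: originalDuration),
                           toDuration: scaledDuration)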
Video Specs
Format: H.264, 1280x544
FPS: 25
Data Size: 26 MB
Data Rate: 1.17 Mbit/s
I have a feeling that playing your videos at 10x using scaleTimeRange:toDuration: simply has the effect of multiplying your data rate by 10, bringing it up to roughly 12 Mbit/s, which OS X machines can handle but iOS devices cannot.
In other words, you're creating video that needs to play back at 250 frames per second, which is pushing AVPlayer too hard.
If I didn't know about your other question, I would have said that the solution is to export your AVComposition using AVAssetExportSession, which should result in your high-FPS video being downsampled to an easier-to-handle 30 fps, and then play that with AVPlayer.
If AVAssetExportSession isn't working, you could try applying the speedup effect yourself, by reading the frames from the source video using AVAssetReader and writing every tenth frame to the output file using AVAssetWriter (don't forget to set the correct presentation timestamps).
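A rough sketch of that reader/writer approach in Swift (the output settings match the specs above; the pixel format and back-pressure handling are simplified assumptions):

import AVFoundation

// Keep every 10th frame of the source and compress its timestamps by 10x,
// producing a file that plays the fast-motion effect at a normal frame rate.
func writeSpedUpCopy(of sourceURL: URL, to outputURL: URL, factor: Int64 = 10) throws {
    let asset = AVAsset(url: sourceURL)
    guard let sourceTrack = asset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let readerOutput = AVAssetReaderTrackOutput(
        track: sourceTrack,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String:
                         kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange])
    reader.add(readerOutput)

    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
    let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: 1280,
        AVVideoHeightKey: 544])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput,
                                                       sourcePixelBufferAttributes: nil)
    writer.add(writerInput)

    reader.startReading()
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    var frameIndex: Int64 = 0
    while let sample = readerOutput.copyNextSampleBuffer() {
        defer { frameIndex += 1 }
        // Keep only every `factor`-th frame.
        guard frameIndex % factor == 0,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sample) else { continue }

        // Re-time the kept frame: divide its original timestamp by the speed factor.
        let pts = CMSampleBufferGetPresentationTimeStamp(sample)
        let newPTS = CMTime(value: pts.value / factor, timescale: pts.timescale)

        while !writerInput.isReadyForMoreMediaData {
            Thread.sleep(forTimeInterval: 0.01)  // crude back-pressure handling for this sketch
        }
        adaptor.append(pixelBuffer, withPresentationTime: newPTS)
    }

    writerInput.markAsFinished()
    writer.finishWriting {}  // fire-and-forget here; wait for the handler in real code
}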

AVPlayer does not play back AVComposition with more than 2 clips

I am attempting to stitch together video assets using AVComposition based on the code here:
https://developer.apple.com/library/mac/samplecode/AVCompositionDebugViewer/Introduction/Intro.html
On OS X it works perfectly; however, on iOS, when playing back via AVPlayer, it only works with 1 or 2 input clips. If I attempt to add a third, nothing is played back on the AVPlayerLayer. Weirdly, if I observe the AVPlayer playback time using addPeriodicTimeObserverForInterval, the video appears to be playing for the correct duration, but nothing shows up on the layer.
Does anyone have any insight into why this would be?
It turns out I was creating CMTime objects with differing timescale values, which caused rounding errors and created gaps in my tracks. If a track had a gap, it would simply fail to play. Ensuring that all my CMTime objects used the same timescale made everything work perfectly.
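For anyone hitting the same thing, here is a Swift sketch of stitching clips with a single shared timescale so rounding can't open gaps between insertions (clipURLs stands in for the input assets):

import AVFoundation

// Stitch clips end to end, keeping every CMTime in one timescale.
func stitch(_ clipURLs: [URL]) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    guard let videoTrack = composition.addMutableTrack(
        withMediaType: .video,
        preferredTrackID: kCMPersistentTrackID_Invalid) else { return composition }

    let timescale: CMTimeScale = 600
    var cursor = CMTime(value: 0, timescale: timescale)

    for url in clipURLs {
        let asset = AVAsset(url: url)
        guard let sourceTrack = asset.tracks(withMediaType: .video).first else { continue }

        // Convert each clip's duration to the shared timescale before using it.
        let duration = asset.duration.convertScale(timescale, method: .default)
        try videoTrack.insertTimeRange(CMTimeRange(start: .zero, duration: duration),
                                       of: sourceTrack,
                                       at: cursor)
        cursor = CMTimeAdd(cursor, duration)
    }
    return composition
}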

iOS/AVFoundation: How to eliminate (occasional) blank frames between different videos within an AVComposition during playback

The app I'm working on loops a video a specified number of times by adding the same AVAssetTrack (created from the original video URL) multiple times to the same AVComposition at successive intervals. The app similarly inserts a new video clip into an existing composition by removing the time range from the composition's AVMutableCompositionTrack (for AVMediaTypeVideo) and inserting the new clip's AVAssetTrack into the previously removed time range.
However, occasionally and somewhat rarely, after inserting a new clip as described above into a time range within a repeat of the original looping video, there are blank frames that appear only at the video loop's transition points (within the composition), and only during playback - the video exports correctly without gaps.
This leads me to believe the issue is with the AVPlayer or AVPlayerItem and how the frames are buffered for playback, rather than with how I'm inserting/looping the clips or choosing the CMTime stamps to do so. The app is doing a bunch of things at once (loop visualization in the UI via an NSTimer, audio playback via Amazing Audio Engine) - could my issue be a result of competition for resources?
One more note: I understand that discrepancies between audio and video in an asset can cause glitches (i.e. the underlying audio is a little bit longer than the video length), but as I'm not adding an audioEncodingTarget to the GPUImageWriter that I'm using to record and save the video, the videos have no audio components.
Any thoughts or directions you can point me in would be greatly appreciated! Many thanks in advance.
Update: the flashes coincide with the "Had to drop a video frame" error logged by the GPUImage library, which according to the creator has to do with the phone not being able to process video fast enough. Could multi-threading solve this?
Update 2: The flashes actually don't always correspond to the "Had to drop a video frame" error. I have also disabled all of the AVRecorder/Amazing Audio Engine code and the issue still persists, so it is not a problem of resource competition between those engines. I have been logging properties of the AVPlayerItem and notice that isPlaybackLikelyToKeepUp is always NO and isPlaybackBufferFull is always YES.
So the problem is solved - it's sort of frustrating how brutally simple the fix is. I just used a time range one frame shorter when adding the videos to the composition, rather than the AVAssetTrack's full time range. No more flashes. Hopefully the users won't miss that 30th of a second :)
CMTime shortenedDuration = CMTimeSubtract(originalVideoAssetTrack.timeRange.duration, CMTimeMake(1, 30));
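In Swift, applying that trimmed range when appending each loop might look like this (loopTrack, compositionTrack and the insertion point are placeholders):

import AVFoundation

// Append one copy of the loop, trimmed by a single frame at 30 fps, so
// consecutive copies don't flash a blank frame at the transition.
func appendLoop(_ loopTrack: AVAssetTrack,
                to compositionTrack: AVMutableCompositionTrack,
                at cursor: CMTime) throws -> CMTime {
    let oneFrame = CMTime(value: 1, timescale: 30)
    let shortened = CMTimeSubtract(loopTrack.timeRange.duration, oneFrame)
    try compositionTrack.insertTimeRange(CMTimeRange(start: .zero, duration: shortened),
                                         of: loopTrack,
                                         at: cursor)
    return CMTimeAdd(cursor, shortened)  // next insertion point
}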

seeking or scrubbing a video stream in reverse

I've created a scrubber in my app that allows the user to scrub forwards/backwards through a video via [AVPlayer seekToTime:toleranceBefore:toleranceAfter:].
The video that is being scrubbed is captured via an AVCaptureSession that uses AVCaptureMovieFileOutput. I've ffprobed the resulting .MOV and the results are as expected (e.g., on my iPhone 5s I'm recording at 120fps at approx 23000 kb/s with approx 1 keyframe per second).
Since there is only approximately 1 keyframe per second, it is difficult to scrub backward through the video with any precision and without lag (since the player has to go back to the closest keyframe and then compute the frame at my current scrubbing position).
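For context, a frame-accurate seek of the kind described looks roughly like this in Swift (the zero tolerances and the targetSeconds value from the scrubber are illustrative):

import AVFoundation

// Zero tolerances give exact positioning, but force the player to decode forward
// from the nearest preceding keyframe - which is what makes reverse scrubbing lag
// on footage with only ~1 keyframe per second.
func scrub(_ player: AVPlayer, to targetSeconds: Double) {
    let target = CMTime(seconds: targetSeconds, preferredTimescale: 600)
    player.seek(to: target, toleranceBefore: .zero, toleranceAfter: .zero)
}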
So I'm wondering if there is a better strategy for smooth scrubbing? There are apps out there that do this really well (e.g., I've examined the Coach's Eye app and it records video precisely the same way I do and yet its scrubbing performance is quite good).
I'd be very appreciative of any suggestions.

In AVFoundation, how to synchronize recording and playback

I am interested in recording media using an AVCaptureSession in iOS while playing media back using an AVPlayer (specifically, I am playing back audio and recording video, but I'm not sure it matters).
The problem is, when I play the resulting media back together later, they are out of sync. Is it possible to synchronize them, either by ensuring that playback and recording start simultaneously, or by discovering what the offset is between them? I probably need the sync to be on the order of 10 ms. It is unreasonable to assume that I can always capture audio (since the user may use headphones), so syncing via analysis of original and recorded audio is not an option.
This question suggests that it's possible to end playback and recording simultaneously and determine the initial offset from the resulting lengths that way, but I'm unclear how to get them to end simultaneously. I have two cases: 1) the audio playback runs out, and 2), the user hits the "stop recording" button.
This question suggests priming and then applying a fixed, but possibly device-dependent delay, which is obviously a hack, but if it's good enough for audio it's obviously worth considering for video.
Is there another media layer I can use to perform the required synchronization?
Related: this question is unanswered.
If you are specifically using AVPlayer to play back audio, I would suggest using Audio Queue Services for that instead. It's seamless and fast, as it reads buffer by buffer, and play/pause is faster than with AVPlayer.
There is also the possibility that you are missing an initial call to [avPlayer prepareToPlay], which might be adding overhead to syncing before the audio plays.
Hope it helps you.
