I have an array of AVAssets ([AVAsset]), and I am trying to combine them into a single asset so that I can play the video back seamlessly (I tried using an AVQueuePlayer, but that does not play the assets back seamlessly).
Below is what I have so far, but when I try to play the final composition, it only plays the first track, even though the composition reports all of the tracks and its total duration equals the sum of the tracks.
Am I missing a step, even though it appears that all the tracks are in the composition? Or do I need to handle the AVPlayer differently when the AVPlayerItem has multiple tracks?
let playerLayer: AVPlayerLayer = AVPlayerLayer()
lazy var videoPlayer: AVPlayer = AVPlayer()
var videoClips = [AVAsset]()
let videoComposition = AVMutableComposition()
var playerItem: AVPlayerItem!
var lastTime: CMTime = kCMTimeZero
for clipIndex in videoClips {
    let videoCompositionTrack = videoComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
    do {
        try videoCompositionTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, clipIndex.duration),
                                                  ofTrack: clipIndex.tracksWithMediaType(AVMediaTypeVideo)[0],
                                                  atTime: lastTime)
        lastTime = CMTimeAdd(lastTime, clipIndex.duration)
    } catch {
        print("Failed to insert track")
    }
}
print("VideoComposition Tracks: \(videoComposition.tracks.count)") // Shows multiple tracks
playerItem = AVPlayerItem(asset: videoComposition)
print("PlayerItem Duration: \(playerItem.duration.seconds)") // Shows the duration of all tracks together
print("PlayerItem Tracks: \(playerItem.tracks.count)") // Shows same number of tracks as the VideoComposition Track count
videoPlayer = AVPlayer(playerItem: playerItem)
playerLayer.player = videoPlayer
videoPlayer.volume = 0.0
videoPlayer.play() // Only plays the first track
I was able to figure out an answer to the most important question. In order to play all of the clips back to back, they need to be inserted into the same track. To do this, move the following line outside of (before) the for loop:
let videoCompositionTrack = videoComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
Here is the full, corrected code:
let playerLayer: AVPlayerLayer = AVPlayerLayer()
lazy var videoPlayer: AVPlayer = AVPlayer()
var videoClips = [AVAsset]()
let videoComposition = AVMutableComposition()
var playerItem: AVPlayerItem!
var lastTime: CMTime = kCMTimeZero
let videoCompositionTrack = videoComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
for clipIndex in videoClips {
    do {
        try videoCompositionTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, clipIndex.duration),
                                                  ofTrack: clipIndex.tracksWithMediaType(AVMediaTypeVideo)[0],
                                                  atTime: lastTime)
        lastTime = CMTimeAdd(lastTime, clipIndex.duration)
    } catch {
        print("Failed to insert track")
    }
}
print("VideoComposition Tracks: \(videoComposition.tracks.count)") // Shows multiple tracks
playerItem = AVPlayerItem(asset: videoComposition)
print("PlayerItem Duration: \(playerItem.duration.seconds)") // Shows the duration of all tracks together
print("PlayerItem Tracks: \(playerItem.tracks.count)") // Shows same number of tracks as the VideoComposition Track count
videoPlayer = AVPlayer(playerItem: playerItem)
playerLayer.player = videoPlayer
videoPlayer.volume = 0.0
videoPlayer.play() // Does play all clips sequentially
EDIT: I mentioned earlier that I was still wondering how to play multiple tracks in one asset. I understand now that that's not how it works. A good resource:
https://developer.apple.com/library/mac/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/04_MediaCapture.html
Related
I have an audio URL (.m4a) that I create using AVAudioRecorder. I want to share that audio on Instagram, so I convert the audio to a video. The issue is that after the conversion, when I save the video URL to the Files app using a UIActivityViewController, I can replay the video, see the duration (e.g. 7 seconds), and hear the audio with no problem; a black screen with a sound icon appears.
But when I save the video to the Photos library using the UIActivityViewController, the video shows the 7 seconds but nothing plays, the video is all gray, and the sound icon doesn't show.
Why does the video successfully save and play in the Files app but save and not play in the Photos library?
let asset: AVURLAsset = AVURLAsset(url: audioURL)
let mixComposition = AVMutableComposition()
guard let compositionTrack = mixComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: CMPersistentTrackID()) else { return }
let track = asset.tracks(withMediaType: .audio)
guard let assetTrack = track.first else { return }
do {
    try compositionTrack.insertTimeRange(CMTimeRangeMake(start: .zero, duration: assetTrack.timeRange.duration), of: assetTrack, at: .zero)
} catch {
    print(error.localizedDescription)
}

guard let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetPassthrough) else { return }

let dirPath = NSTemporaryDirectory().appending("\(UUID().uuidString).mov")
let outputFileURL = URL(fileURLWithPath: dirPath)

exporter.outputFileType = .mov
exporter.outputURL = outputFileURL
exporter.shouldOptimizeForNetworkUse = true

exporter.exportAsynchronously {
    switch exporter.status {
    // ...
        guard let videoURL = exporter.outputURL else { return }
        // present UIActivityViewController to save videoURL and then save it to the Photos Library via 'Save Video'
    }
}
As Lance rightfully pointed out, the issue is that while a file in the .mov or .mp4 format was exported, it contained no video; it was just audio playing.
On reading a bit more, .mp4, for example, is just a digital multimedia container format that can very well hold only audio, so it's possible to save an audio-only file as .mp4 / .mov.
What was needed was to add an empty video track to the AVMutableComposition. Lance already posted a great solution that works perfectly well and is more self-contained than the alternative solution I propose, which relies on having a blank 1-second video.
Overview of how it works
1. Get a blank video file that is 1 second long in the resolution you want, for example 1920 x 1080
2. Retrieve the video track from this video asset
3. Retrieve the audio track from your audio file
4. Create an AVMutableComposition which will be used to merge the audio and video tracks
5. Configure an AVMutableCompositionTrack with the audio track and add that to the main AVMutableComposition
6. Configure an AVMutableVideoComposition with the video track
7. Use an AVAssetExportSession to export the final video with the AVMutableComposition and the AVMutableVideoComposition
The code
In most of the code below you will see multiple guard statements. You could collapse them into a single guard; however, with this kind of task it is useful to know exactly where a failure occurred, since there are several reasons an export can fail.
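The code below also calls a manageError helper, which isn't part of AVFoundation and isn't shown in the answer; a minimal, hypothetical sketch of what it could look like, assuming you only want to log the failure point:
import Foundation

// Hypothetical helper (not from the original answer): logs which step of the
// export pipeline failed so the guard statements below report distinct errors.
private func manageError(_ error: Error?, withMessage message: String) {
    if let error = error {
        print("\(message): \(error.localizedDescription)")
    } else {
        print(message)
    }
    // You could also notify a delegate or call a completion handler here.
}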
Configuring the audio track
private func configureAudioTrack(_ audioURL: URL,
inComposition composition: AVMutableComposition) -> AVMutableCompositionTrack?
{
// Initialize an AVURLAsset with your audio file
let audioAsset: AVURLAsset = AVURLAsset(url: audioURL)
let trackTimeRange = CMTimeRange(start: .zero,
duration: audioAsset.duration)
// Get the audio track from the audio asset
guard let sourceAudioTrack = audioAsset.tracks(withMediaType: .audio).first
else
{
manageError(nil, withMessage: "Error retrieving audio track from source file")
return nil
}
// Add a new audio track to the AVMutableComposition
guard let audioTrack = composition.addMutableTrack(withMediaType: .audio,
preferredTrackID: CMPersistentTrackID())
else
{
// manage your error
return nil
}
do {
// Insert the contents of the audio source into the new audio track
try audioTrack.insertTimeRange(trackTimeRange,
of: sourceAudioTrack,
at: .zero)
}
catch {
// manage your error
}
return audioTrack
}
Configuring the video track
private func configureVideoTrack(inComposition composition: AVMutableComposition) -> AVMutableCompositionTrack?
{
// Initialize a video asset with the empty video file
guard let blankMoviePathURL = Bundle.main.url(forResource: "blank",
                                              withExtension: ".mp4")
else
{
    // manage errors
    return nil
}
let videoAsset = AVAsset(url: blankMoviePathURL)
// Get the video track from the empty video
guard let sourceVideoTrack = videoAsset.tracks(withMediaType: .video).first
else
{
// manage errors
return nil
}
// Insert a new video track to the AVMutableComposition
guard let videoTrack = composition.addMutableTrack(withMediaType: .video,
preferredTrackID: kCMPersistentTrackID_Invalid)
else
{
// manage errors
return nil
}
let trackTimeRange = CMTimeRange(start: .zero,
duration: composition.duration)
do {
// Insert the contents of the video source into the new video track
try videoTrack.insertTimeRange(trackTimeRange,
of: sourceVideoTrack,
at: .zero)
}
catch {
// manage errors
}
return videoTrack
}
Configure the video composition
// Configure the video properties like resolution and fps
private func createVideoComposition(with videoCompositionTrack: AVMutableCompositionTrack) -> AVMutableVideoComposition
{
let videoComposition = AVMutableVideoComposition()
// Set the fps
videoComposition.frameDuration = CMTime(value: 1,
timescale: 25)
// Video dimensions
videoComposition.renderSize = CGSize(width: 1920, height: 1080)
// Specify the duration of the video composition
let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRange(start: .zero, duration: .indefinite)
// Add the video composition track to a new layer
let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoCompositionTrack)
let transform = videoCompositionTrack.preferredTransform
layerInstruction.setTransform(transform, at: .zero)
// Apply the layer configuration instructions
instruction.layerInstructions = [layerInstruction]
videoComposition.instructions = [instruction]
return videoComposition
}
Configure the AVAssetExportSession
private func configureAVAssetExportSession(with composition: AVMutableComposition,
videoComposition: AVMutableVideoComposition) -> AVAssetExportSession?
{
// Configure export session
guard let exporter = AVAssetExportSession(asset: composition,
presetName: AVAssetExportPresetHighestQuality)
else
{
// Manage your errors
return nil
}
// Configure where the exported file will be stored
let documentsURL = FileManager.default.urls(for: .documentDirectory,
in: .userDomainMask)[0]
let fileName = "\(UUID().uuidString).mov"
let dirPath = documentsURL.appendingPathComponent(fileName)
let outputFileURL = dirPath
// Apply exporter settings
exporter.videoComposition = videoComposition
exporter.outputFileType = .mov
exporter.outputURL = outputFileURL
exporter.shouldOptimizeForNetworkUse = true
return exporter
}
One important thing to note here is to set the exporter's preset to a movie preset such as AVAssetExportPresetHighestQuality or AVAssetExportPresetLowQuality, i.e. something other than AVAssetExportPresetPassthrough, which, as per the documentation, is
A preset to export the asset in its current format, unless otherwise
prohibited.
With passthrough you would still get an audio-only mp4 or mov file, since the current format of the composition is audio. I did not test this extensively, but that is what a few tests showed.
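If you are unsure which presets will actually work for a given composition, you can ask AVFoundation which presets it considers compatible before creating the exporter; a small sketch (the fallback preset here is an illustrative assumption):
import AVFoundation

// List the export presets AVFoundation reports as compatible with the composition.
let compatiblePresets = AVAssetExportSession.exportPresets(compatibleWith: composition)
print("Compatible presets: \(compatiblePresets)")

// Prefer a movie preset over passthrough so the exporter actually renders video frames.
let presetName = compatiblePresets.contains(AVAssetExportPresetHighestQuality)
    ? AVAssetExportPresetHighestQuality
    : AVAssetExportPreset1280x720 // fallback chosen purely for illustration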
Finally, you can bring all of the above functions together like so:
func generateMovie(with audioURL: URL)
{
delegate?.audioMovieExporterDidStart(self)
let composition = AVMutableComposition()
// Configure the audio and video tracks in the new composition
guard let _ = configureAudioTrack(audioURL, inComposition: composition),
let videoCompositionTrack = configureVideoTrack(inComposition: composition)
else
{
// manage error
return
}
let videoComposition = createVideoComposition(with: videoCompositionTrack)
if let exporter = configureAVAssetExportSession(with: composition,
videoComposition: videoComposition)
{
exporter.exportAsynchronously
{
switch exporter.status {
case .completed:
guard let videoURL = exporter.outputURL
else
{
// manage errors
return
}
// notify someone the video is ready at videoURL
default:
    // manage error
    break
}
}
}
}
Final Thoughts
You could test drive a working sample here
I converted this into a simple library, available at the same link, if you wish to use it; it lets you configure the orientation and fps and even set a background color for the video
If you just want the blank videos, you can get them from here
So it seems that although the code from my question did convert the audio file to a video file, there still wasn't a video track. I know this for a fact because after I got the exporter's videoURL from my question, I tried to add a watermark to it, and the watermark code kept crashing on
let videoTrack = asset.tracks(withMediaType: AVMediaType.video)[0]
Basically, the code from my question converts audio to video but doesn't add a video track.
What I assume is happening is that when the Files app reads the file, it knows that it's a .mov or .mp4 file and will play the audio track even if the video track is missing.
Conversely, when the Photos app reads the file, it also knows that it's a .mov or .mp4 file, but if there isn't a video track, it won't play anything.
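A quick way to confirm whether an exported file actually contains a video track is to check before indexing into the tracks array; a small sketch (exportedURL is a placeholder for whatever URL your exporter produced):
import AVFoundation

// exportedURL stands in for the URL returned by the export session.
let exportedAsset = AVAsset(url: exportedURL)

if let videoTrack = exportedAsset.tracks(withMediaType: .video).first {
    print("Video track found: \(videoTrack.trackID)")
} else {
    print("No video track in the exported file; the Photos app will likely refuse to play it.")
}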
I had to combine these 2 answers to get the audio to play as a video in the Photos app.
1st - I added my app icon (you can add any image) as a single image in an array of images to make a video track, using the code from "How do I export UIImage array as a movie?" answered by @scootermg.
The code from @scootermg's answer is conveniently in one file, in this GitHub by @dldnh. In his code, in the ImageAnimator class's render function, instead of saving to the library I returned the videoWriter's output URL in the completion handler.
2nd - I combined the app-icon video that I just made with the audio URL from my question, using the code from "Swift Merge audio and video files into one video" answered by @TungFam.
In the mixComposition from TungFam's answer I used the audio asset's duration for the length of the video.
do {
try mutableCompositionVideoTrack[0].insertTimeRange(CMTimeRangeMake(start: .zero,
duration: aAudioAssetTrack.timeRange.duration),
of: aVideoAssetTrack,
at: .zero)
try mutableCompositionAudioTrack[0].insertTimeRange(CMTimeRangeMake(start: .zero,
duration: aAudioAssetTrack.timeRange.duration),
of: aAudioAssetTrack,
at: .zero)
if let aAudioOfVideoAssetTrack = aAudioOfVideoAssetTrack {
try mutableCompositionAudioOfVideoTrack[0].insertTimeRange(CMTimeRangeMake(start: .zero,
duration: aAudioAssetTrack.timeRange.duration),
of: aAudioOfVideoAssetTrack,
at: .zero)
}
} catch {
print(error.localizedDescription)
}
The following function, mergeVideos(), reads prerecorded video files and makes a new movie file with tracks created from those files. dump(mixComposition.tracks) returns the following for two video file tracks:
▿ 2 elements
- <AVMutableCompositionTrack: 0x14661a30 trackID = 1, mediaType = vide, editCount = 1> #0
- super: AVCompositionTrack
- super: AVAssetTrack
- super: NSObject
- <AVMutableCompositionTrack: 0x14592bc0 trackID = 2, mediaType = vide, editCount = 1> #1
- super: AVCompositionTrack
- super: AVAssetTrack
- super: NSObject
The problem is that the newly created video's playing duration is only as long as the first video's duration. The insert times and durations of the added videos are as follows, and there is no problem with them.
▿ 2 elements
- "1.mp4"
- "video.mp4"
First video's insert and duration.
time= (0.0, 5.2316666666666665)
Second video's insert and duration.
time= (5.2316666666666665, 5.4366666666666665)
After hours of searching we could not find a similar problem, so we decided to ask.
func mergeVideos() {
    let mixComposition = AVMutableComposition.init()
    var timeRange: CMTimeRange!
    var insertTime = kCMTimeZero

    for k in 0..<videoListOnDisk.count {
        let url = URL(fileURLWithPath: NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]).appendingPathComponent(videoListOnDisk[k])
        let videoAsset = AVURLAsset(url: url)
        let track = mixComposition.addMutableTrack(withMediaType: AVMediaType.video,
                                                   preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
        timeRange = CMTimeRangeMake(kCMTimeZero, videoAsset.duration)
        do {
            try track?.insertTimeRange(timeRange, of: videoAsset.tracks(withMediaType: .video)[0], at: insertTime)
        } catch let error as NSError {
            print("error when adding video to mix = \(error)")
        }
        insertTime = CMTimeAdd(insertTime, videoAsset.duration)
    }

    dump(mixComposition.tracks)

    let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)
    exporter!.outputURL = fileURL(combinedVideoFileName)
    exporter!.outputFileType = AVFileType.mp4
    exporter!.shouldOptimizeForNetworkUse = false
    exporter!.exportAsynchronously() {
        DispatchQueue.main.async(execute: { () -> Void in
            print("I am done with exporting \(exporter?.status.rawValue)")
        })
    }
}
The problem is the repetition of this line:
let track = mixComposition.addMutableTrack...
Put that before the loop, so that you create only one video track and insert all the video clips into that one track.
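Applied to the code in the question, the fix looks roughly like this (a sketch; only the track creation has moved, and error handling is unchanged):
let mixComposition = AVMutableComposition()
var insertTime = kCMTimeZero

// Create one video track up front; every clip will be appended to it.
let track = mixComposition.addMutableTrack(withMediaType: .video,
                                           preferredTrackID: kCMPersistentTrackID_Invalid)

for fileName in videoListOnDisk {
    let url = URL(fileURLWithPath: NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]).appendingPathComponent(fileName)
    let videoAsset = AVURLAsset(url: url)
    do {
        // Append this clip to the single track so the clips play back to back.
        try track?.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration),
                                   of: videoAsset.tracks(withMediaType: .video)[0],
                                   at: insertTime)
    } catch {
        print("error when adding video to mix = \(error)")
    }
    insertTime = CMTimeAdd(insertTime, videoAsset.duration)
}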
I recorded a 240 fps video after changing the AVCaptureDeviceFormat. If I save that video to the photo library, the slow-mo effect is there. But if I play that file from the documents directory using an AVPlayer, I can't see the slow-mo effect.
Code to play the video:
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:[AVAsset assetWithURL:[NSURL fileURLWithPath:fullPath]]];
AVPlayer *feedVideoPlayer = [AVPlayer playerWithPlayerItem:playerItem];
AVPlayerViewController *playerController = [[AVPlayerViewController alloc] init];
playerController.view.frame = CGRectMake(0, 0, videoPreviewView.frame.size.width, videoPreviewView.frame.size.height);
playerController.player = feedVideoPlayer;
It's a bit annoying, but I believe you'll need to re-create the video in an AVComposition if you don't want to lose quality. I'd love to know if there is another way, but this is what I've come up with. You can technically export the video via AVAssetExportSession, but using a passthrough preset will result in the same video file, which won't be slow motion; you'll need to transcode it, which loses quality (AFAIK; see Issue playing slow-mo AVAsset in AVPlayer for that solution).
The first thing you'll need to do is grab the source media's original time mapping objects. You can do that like so:
let options = PHVideoRequestOptions()
options.version = PHVideoRequestOptionsVersion.current
options.deliveryMode = .highQualityFormat
PHImageManager().requestAVAsset(forVideo: phAsset, options: options, resultHandler: { (avAsset, mix, info) in
    guard let avAsset = avAsset else { return }
    let originalTimeMaps = avAsset.tracks(withMediaType: AVMediaTypeVideo)
        .first?
        .segments
        .flatMap { $0.timeMapping } ?? []
    // hold on to originalTimeMaps for the composition step below
})
Once you have timeMappings of the original media (the one sitting in your documents directory), you can pass in the URL of that media and the original CMTimeMapping objects that you would like to recreate. Then create a new AVComposition that is ready to play in an AVPlayer. You'll need a class similar to this:
class CompositionMapper {
    let url: URL
    let timeMappings: [CMTimeMapping]

    init(for url: URL, with timeMappings: [CMTimeMapping]) {
        self.url = url
        self.timeMappings = timeMappings
    }

    init(with asset: AVAsset, and timeMappings: [CMTimeMapping]) {
        guard let asset = asset as? AVURLAsset else {
            print("cannot get a base URL from this asset.")
            fatalError()
        }
        self.timeMappings = timeMappings
        self.url = asset.url
    }

    func compose() -> AVComposition {
        let composition = AVMutableComposition(urlAssetInitializationOptions: [AVURLAssetPreferPreciseDurationAndTimingKey: true])

        let emptyTrack = composition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)
        let audioTrack = composition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: kCMPersistentTrackID_Invalid)

        let asset = AVAsset(url: url)
        guard let videoAssetTrack = asset.tracks(withMediaType: AVMediaTypeVideo).first else { return composition }

        var segments: [AVCompositionTrackSegment] = []
        for map in timeMappings {
            let segment = AVCompositionTrackSegment(url: url, trackID: kCMPersistentTrackID_Invalid, sourceTimeRange: map.source, targetTimeRange: map.target)
            segments.append(segment)
        }

        emptyTrack.preferredTransform = videoAssetTrack.preferredTransform
        emptyTrack.segments = segments

        if let _ = asset.tracks(withMediaType: AVMediaTypeVideo).first {
            audioTrack.segments = segments
        }

        return composition.copy() as! AVComposition
    }
}
You can then use the compose() function of your CompositionMapper class to give you an AVComposition that is ready to play in an AVPlayer, which should respect the CMTimeMapping objects that you've passed in.
let compositionMapper = CompositionMapper(for: someAVAssetURL, with: originalTimeMaps)
let mappedComposition = compositionMapper.compose()
let playerItem = AVPlayerItem(asset: mappedComposition)
let player = AVPlayer(playerItem: playerItem)
playerItem.audioTimePitchAlgorithm = AVAudioTimePitchAlgorithmVarispeed
Let me know if you need help converting this to Objective-C, but it should be relatively straightforward.
I'm working on an app that merges multiple video clips into one final video. I would like to give users the ability to mute individual clips if desired (so, only parts of the final merged video would be muted). I have wrapped the AVAssets in a class called "Video" that has a "shouldMute" property.
My problem is, when I set the volume of one of the AVAssetTracks to zero, it stays muted for the remainder of the final video. Here is my code:
var completeDuration : CMTime = CMTimeMake(0, 1)
var insertTime = kCMTimeZero
var layerInstructions = [AVVideoCompositionLayerInstruction]()
let mixComposition = AVMutableComposition()
let audioMix = AVMutableAudioMix()
let videoTrack =
mixComposition.addMutableTrack(withMediaType: AVMediaType.video,
preferredTrackID: kCMPersistentTrackID_Invalid)
let audioTrack = mixComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)
// iterate through video assets and merge together
for (i, video) in clips.enumerated() {
    let videoAsset = video.asset
    var clipDuration = videoAsset.duration

    do {
        if video == clips.first {
            insertTime = kCMTimeZero
        } else {
            insertTime = completeDuration
        }

        if let videoAssetTrack = videoAsset.tracks(withMediaType: .video).first {
            try videoTrack?.insertTimeRange(CMTimeRangeMake(kCMTimeZero, clipDuration), of: videoAssetTrack, at: insertTime)
            completeDuration = CMTimeAdd(completeDuration, clipDuration)
        }

        if let audioAssetTrack = videoAsset.tracks(withMediaType: .audio).first {
            try audioTrack?.insertTimeRange(CMTimeRangeMake(kCMTimeZero, clipDuration), of: audioAssetTrack, at: insertTime)

            if video.shouldMute {
                let audioMixInputParams = AVMutableAudioMixInputParameters()
                audioMixInputParams.trackID = audioTrack!.trackID
                audioMixInputParams.setVolume(0.0, at: insertTime)
                audioMix.inputParameters.append(audioMixInputParams)
            }
        }
    } catch let error as NSError {
        print("error: \(error)")
    }

    let videoInstruction = videoCompositionInstructionForTrack(track: videoTrack!, video: video)
    if video != clips.last {
        videoInstruction.setOpacity(0.0, at: completeDuration)
    }
    layerInstructions.append(videoInstruction)
} // end of video asset iteration
If I add another setVolume:atTime instruction to increase the volume back to 1.0 at the end of the clip, then the first volume instruction is completely ignored and the whole video plays at full volume.
In other words, this isn't working:
if video.shouldMute {
    let audioMixInputParams = AVMutableAudioMixInputParameters()
    audioMixInputParams.trackID = audioTrack!.trackID
    audioMixInputParams.setVolume(0.0, at: insertTime)
    audioMixInputParams.setVolume(1.0, at: completeDuration)
    audioMix.inputParameters.append(audioMixInputParams)
}
I have set the audioMix on both my AVPlayerItem and AVAssetExportSession. What am I doing wrong? What can I do to allow users to mute the time ranges of individual clips before merging into the final video?
Apparently I was going about this wrong. As you can see above, my composition has two AVMutableCompositionTracks: a video track and an audio track. Even though I inserted the time ranges of a series of other tracks into those two tracks, there are still ultimately only two tracks. So, I only needed one AVMutableAudioMixInputParameters object to associate with my one audio track.
I initialized a single AVMutableAudioMixInputParameters object and then, after inserting the time range of each clip, I checked whether it should be muted and set a volume ramp for the clip's time range (the time range in relation to the entire audio track). Here's what that looks like, inside my clip iteration:
if let audioAssetTrack = videoAsset.tracks(withMediaType: .audio).first {
    try audioTrack?.insertTimeRange(CMTimeRangeMake(kCMTimeZero, clipDuration), of: audioAssetTrack, at: insertTime)
    if video.shouldMute {
        audioMixInputParams.setVolumeRamp(fromStartVolume: 0.0, toEndVolume: 0.0, timeRange: CMTimeRangeMake(insertTime, clipDuration))
    } else {
        audioMixInputParams.setVolumeRamp(fromStartVolume: 1.0, toEndVolume: 1.0, timeRange: CMTimeRangeMake(insertTime, clipDuration))
    }
}
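For completeness, here is a sketch of how that single parameters object might be created and how the resulting audio mix is attached for playback and export (anything not shown in the snippet above, such as the exact variable names, is an assumption):
// One parameters object for the single composition audio track.
let audioMixInputParams = AVMutableAudioMixInputParameters(track: audioTrack)

// ... run the clip iteration shown above, adding one volume ramp per clip ...

let audioMix = AVMutableAudioMix()
audioMix.inputParameters = [audioMixInputParams]

// For playback:
let playerItem = AVPlayerItem(asset: mixComposition)
playerItem.audioMix = audioMix

// For export:
let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)
exporter?.audioMix = audioMix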
I am using AVPlayer to play a local video in the background in a loop, and the video plays fine, but after the video finishes there is a pause before it plays again. I have tried many methods and seen many posts on Stack Overflow, but I failed to find an appropriate solution. I am using Swift 3.
Code is here :
var videoplayer: AVPlayer = AVPlayer()

override func viewDidLoad() {
    super.viewDidLoad()
    let path = Bundle.main.path(forResource: "background4", ofType: "mp4")
    videoplayer = AVPlayer(url: URL(fileURLWithPath: path!))
    videoplayer.volume = 0
    videoplayer.actionAtItemEnd = AVPlayerActionAtItemEnd.none
    let playerLayer = AVPlayerLayer(player: videoplayer)
    playerLayer.frame = self.view.frame
    playerLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
    if (videoplayer.rate != 0) {
        print("playing videoplayer")
        self.blurBgImage.isHidden = true
    }
    playerLayer.zPosition = -1
    videoplayer.rate = 0
    videoplayer.play()
    self.blurBgImage.layer.addSublayer(playerLayer)
    NotificationCenter.default.addObserver(forName: .AVPlayerItemDidPlayToEndTime, object: videoplayer.currentItem, queue: nil, using: { (_) in
        DispatchQueue.main.async {
            let t1 = CMTimeMake(5, 100)
            self.videoplayer.seek(to: t1)
            self.videoplayer.play()
        }
    })
}
I have also tried AVPlayerLooper.
Code is :
var playerLooper: NSObject?
var playerLayer:AVPlayerLayer!
var queuePlayer: AVQueuePlayer?
func playVideo(_ filmName: String) {
    if let path = Bundle.main.path(forResource: filmName, ofType: "mp4") {
        let url = URL(fileURLWithPath: path)
        if #available(tvOS 10.0, *) {
            let playerItem = AVPlayerItem(url: url as URL)
            self.videoplayer = AVQueuePlayer(items: [playerItem])
            self.playerLayer = AVPlayerLayer(player: self.videoplayer)
            self.playerLooper = AVPlayerLooper(player: self.videoplayer as! AVQueuePlayer, templateItem: playerItem)
            self.blurBgImage.layer.addSublayer(playerLayer!)
            self.playerLayer?.frame = self.view.frame
            self.videoplayer.volume = 10
            self.videoplayer.play()
        } else {
            videoplayer = AVPlayer(url: url)
            videoplayer.play()
            loopVideo(videoplayer)
        }
    }
}
What should I do for seamless looping? Thanks in advance.
FYI there is sample code here: https://developer.apple.com/library/content/samplecode/avloopplayer/Introduction/Intro.html
@matt's deleted answer works fine for me (on device and simulator) for iOS 10+ devices:
Use AVPlayerLooper. That is exactly what it is for.
https://developer.apple.com/reference/avfoundation/avplayerlooper
Basically it implements AVQueuePlayer for you, constantly updating the
queue so that it never ends.
It seamlessly loops without any white/black blip.
E.g.
private var looper: AVPlayerLooper?
...
let queuePlayer = AVQueuePlayer(playerItem: item)
looper = AVPlayerLooper(player: queuePlayer, templateItem: item)
videoPlayerLayer.player = queuePlayer
If you end up doing this in a reusable cell (e.g. UICollectionView), then make sure you disable looping before cell re-use or you'll get some obscure crashes:
looper?.disableLooping()
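For example, in a UICollectionViewCell subclass it could look something like this (a sketch; the property names are assumptions):
import AVFoundation
import UIKit

final class VideoLoopCell: UICollectionViewCell {
    // Assumed properties matching the snippet above.
    let videoPlayerLayer = AVPlayerLayer()
    private var looper: AVPlayerLooper?

    override func prepareForReuse() {
        super.prepareForReuse()
        // Stop looping and detach the player before the cell is handed out again.
        looper?.disableLooping()
        looper = nil
        videoPlayerLayer.player = nil
    }
}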