Frame dropping when using AVAssetWriter? - ios

I’m working on an app that processes video frames, draws effects on each frame, and saves the result. When saving the video using AVAssetWriter I get stutters in the resulting video, but when I reduce the amount of processing per frame, the stutter is reduced.
Writing and processing run on separate dispatch queues: every processed frame is dispatched onto a queue for writing.
Here is the code:
_writingQueue.async {
    autoreleasepool {
        // `synchronized` is a custom locking helper, not a Swift builtin
        synchronized(self) {
            if self._status.rawValue >= VideoRecordingModelStatus.finishingRecordingPart1.rawValue {
                return
            }
            if !self._haveStartedSession {
                self._assetWriter?.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
                self._haveStartedSession = true
            }
            let input = (mediaType == AVMediaType.video) ? self._videoInput : self._audioInput
            // Spin-waits until the input can accept more data
            while !(input?.isReadyForMoreMediaData ?? false) {}
            let success = input!.append(sampleBuffer)
            if !success {
                let error = self._assetWriter?.error
                synchronized(self) {
                    self.transitionToStatus(.failed, error: error as NSError?)
                }
            }
        }
    }
}
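As an aside for readers hitting the same stutter: the spin loop above burns CPU on the writing queue whenever the input is busy. A minimal alternative sketch (not from the original post; videoInput, writingQueue, pendingSamples, and assetWriter are assumed names, and pendingSamples is a hypothetical thread-safe FIFO of sample buffers) lets the input pull data when it is ready via requestMediaDataWhenReady(on:using:):

    // Hedged sketch: pull-driven writing instead of spin-waiting.
    videoInput.requestMediaDataWhenReady(on: writingQueue) {
        // Drain queued samples only while the input can accept them.
        while videoInput.isReadyForMoreMediaData {
            guard let sample = pendingSamples.dequeue() else { return }
            if !videoInput.append(sample) {
                print("append failed: \(String(describing: assetWriter.error))")
                return
            }
        }
    }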

Related

"__CFRunLoopModeFindSourceForMachPort returned NULL" messages when using AVAudioPlayer

We're working on a SpriteKit game. In order to have more control over sound effects, we switched from using SKAudioNodes to having some AVAudioPlayers. While everything seems to be working well in terms of game play, frame rate, and sounds, we're seeing occasional error(?) messages in the console output when testing on physical devices:
... [general] __CFRunLoopModeFindSourceForMachPort returned NULL for mode 'kCFRunLoopDefaultMode' livePort: #####
It doesn't seem to really cause any harm when it happens (no sound glitches or hiccups in frame rate or anything), but not understanding exactly what the message means and why it's happening is making us nervous.
Details:
The game is all standard SpriteKit, all events driven by SKActions, nothing unusual there.
The uses of AVFoundation stuff are the following. Initialization of app sounds:
class Sounds {
    let soundQueue: DispatchQueue

    init() {
        do {
            try AVAudioSession.sharedInstance().setActive(true)
        } catch {
            print(error.localizedDescription)
        }
        soundQueue = DispatchQueue.global(qos: .background)
    }

    func execute(_ soundActions: @escaping () -> Void) {
        soundQueue.async(execute: soundActions)
    }
}
Creating various sound effect players:
guard let player = try? AVAudioPlayer(contentsOf: url) else {
    fatalError("Unable to instantiate AVAudioPlayer")
}
player.prepareToPlay()
Playing a sound effect:
let pan = stereoBalance(...)
sounds.execute {
    if player.pan != pan {
        player.pan = pan
    }
    player.play()
}
The AVAudioPlayers are all for short sound effects with no looping, and they get reused. We create about 25 players total, including multiple players for certain effects when they can repeat in quick succession. For a particular effect, we rotate through the players for that effect in a fixed sequence. We have verified that whenever a player is triggered, its isPlaying is false, so we're not trying to invoke play on something that's already playing.
The message doesn't appear that often. Over the course of a 5-10 minute game with possibly thousands of sound effects, we see it maybe 5-10 times.
The message seems to occur most commonly when a bunch of sound effects are being played in quick succession, but it doesn't feel like it's 100% correlated with that.
Not using the dispatch queue (i.e., having sounds.execute just call soundActions() directly) doesn't fix the issue (though that does cause the game to lag significantly). Changing the dispatch queue to some of the other priorities like .utility also doesn't affect the issue.
Making sounds.execute just return immediately (i.e., don't actually call the closure at all, so there's no play()) does eliminate the messages.
We did find the source code that's producing the message at this link:
https://github.com/apple/swift-corelibs-foundation/blob/master/CoreFoundation/RunLoop.subproj/CFRunLoop.c
but we don't understand it except at an abstract level, and are not sure how run loops are involved in the AVFoundation stuff.
Lots of googling has turned up nothing helpful. And as I indicated, it doesn't seem to be causing noticeable problems at all. It would be nice to know why it's happening though, and either how to fix it or to have certainty that it won't ever be an issue.
We're still working on this, but have experimented enough that it's clear how we should do things. Outline:
Use the scene's audioEngine property.
For each sound effect, make an AVAudioFile for reading the audio's URL from the bundle. Read it into an AVAudioPCMBuffer. Stick the buffers into a dictionary that's indexed by sound effect.
Make a bunch of AVAudioPlayerNodes, attach() them to the audioEngine, and connect(playerNode, to: audioEngine.mainMixerNode). At the moment we're creating these dynamically, searching through our current list of player nodes to find one that's not playing and making a new one if none is available. That probably has more overhead than needed, since we have to have callbacks to observe when a player node finishes whatever it's playing so we can set it back to a stopped state. We'll try switching to just a fixed maximum number of active sound effects and rotating through the players in order.
To play a sound effect, grab the buffer for the effect, find a non-busy playerNode, and call playerNode.scheduleBuffer(buffer, ...), plus playerNode.play() if it's not currently playing.
I may update this with more detailed code once we have things fully converted and cleaned up. We still have a couple of long-running AVAudioPlayers that we haven't switched to AVAudioPlayerNodes going through the mixer. But pumping the vast majority of sound effects through the scheme above has eliminated the error message, and it needs far less memory, since there's no duplication of the sound effects like we had before. There's a tiny bit of lag, but we haven't even tried putting some of the work on a background thread yet, and not having to search for and constantly start/stop players might eliminate it anyway.
Since switching to this approach, we've had no more runloop complaints.
Edit: Some example code...
import SpriteKit
import AVFoundation

enum SoundEffect: String, CaseIterable {
    case playerExplosion = "player_explosion"
    // lots more

    var url: URL {
        guard let url = Bundle.main.url(forResource: self.rawValue, withExtension: "wav") else {
            fatalError("Sound effect file \(self.rawValue) missing")
        }
        return url
    }

    func audioBuffer() -> AVAudioPCMBuffer {
        guard let file = try? AVAudioFile(forReading: self.url) else {
            fatalError("Unable to instantiate AVAudioFile")
        }
        guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat, frameCapacity: AVAudioFrameCount(file.length)) else {
            fatalError("Unable to instantiate AVAudioPCMBuffer")
        }
        do {
            try file.read(into: buffer)
        } catch {
            fatalError("Unable to read audio file into buffer, \(error.localizedDescription)")
        }
        return buffer
    }
}
class Sounds {
    var audioBuffers = [SoundEffect: AVAudioPCMBuffer]()
    // more stuff

    init() {
        for effect in SoundEffect.allCases {
            preload(effect)
        }
    }

    func preload(_ sound: SoundEffect) {
        audioBuffers[sound] = sound.audioBuffer()
    }

    func cachedAudioBuffer(_ sound: SoundEffect) -> AVAudioPCMBuffer {
        guard let buffer = audioBuffers[sound] else {
            fatalError("Audio buffer for \(sound.rawValue) was not preloaded")
        }
        return buffer
    }
}

class Globals {
    // Sounds loaded once and shared among all scenes in the game
    static let sounds = Sounds()
}
class SceneAudio {
    let stereoEffectsFrame: CGRect
    let audioEngine: AVAudioEngine
    var playerNodes = [AVAudioPlayerNode]()
    var nextPlayerNode = 0
    // more stuff

    init(stereoEffectsFrame: CGRect, audioEngine: AVAudioEngine) {
        self.stereoEffectsFrame = stereoEffectsFrame
        self.audioEngine = audioEngine
        do {
            try audioEngine.start()
            let buffer = Globals.sounds.cachedAudioBuffer(.playerExplosion)
            // We got up to about 10 simultaneous sounds when really pushing the game
            for _ in 0 ..< 10 {
                let playerNode = AVAudioPlayerNode()
                playerNodes.append(playerNode)
                audioEngine.attach(playerNode)
                audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: buffer.format)
                playerNode.play()
            }
        } catch {
            logging("Cannot start audio engine, \(error.localizedDescription)")
        }
    }

    func soundEffect(_ sound: SoundEffect, at position: CGPoint = .zero) {
        guard audioEngine.isRunning else { return }
        let buffer = Globals.sounds.cachedAudioBuffer(sound)
        let playerNode = playerNodes[nextPlayerNode]
        nextPlayerNode = (nextPlayerNode + 1) % playerNodes.count
        playerNode.pan = stereoBalance(position)
        playerNode.scheduleBuffer(buffer)
    }

    func stereoBalance(_ position: CGPoint) -> Float {
        guard stereoEffectsFrame.width != 0 else { return 0 }
        guard position.x <= stereoEffectsFrame.maxX else { return 1 }
        guard position.x >= stereoEffectsFrame.minX else { return -1 }
        return Float((position.x - stereoEffectsFrame.midX) / (0.5 * stereoEffectsFrame.width))
    }
}
class GameScene: SKScene {
    var audio: SceneAudio!
    // lots more stuff

    // somewhere in initialization
    // gameFrame is the area where action takes place and which
    // determines panning for stereo sound effects
    audio = SceneAudio(stereoEffectsFrame: gameFrame, audioEngine: audioEngine)

    func destroyPlayer(_ player: SKSpriteNode) {
        audio.soundEffect(.playerExplosion, at: player.position)
        // more stuff
    }
}

Record video with AVAssetWriter: first frames are black

I am recording video with AVAssetWriter (the user can also switch to audio only). I start the recording when the app is launched.
But the first frames are black (or very dark). This also happens when I switch from audio to video.
It feels like the AVAssetWriter and/or AVAssetWriterInput are not yet ready to record. How can I avoid this?
I don't know if this is useful info, but I also use a GLKView to display the video.
func start_new_record() {
    do {
        self.file_writer = try AVAssetWriter(url: self.file_url!, fileType: AVFileTypeMPEG4)
        if video_on {
            if file_writer.canAdd(video_writer) {
                file_writer.add(video_writer)
            }
        }
        if file_writer.canAdd(audio_writer) {
            file_writer.add(audio_writer)
        }
    } catch let e as NSError {
        print(e)
    }
}
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    guard is_recording else {
        return
    }
    guard CMSampleBufferDataIsReady(sampleBuffer) else {
        print("data not ready")
        return
    }
    guard let w = file_writer else {
        print("video writer nil")
        return
    }
    if w.status == .unknown && start_recording_time == nil {
        if (video_on && captureOutput == video_output) || (!video_on && captureOutput == audio_output) {
            print("START RECORDING")
            file_writer?.startWriting()
            start_recording_time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
            file_writer?.startSession(atSourceTime: start_recording_time!)
        } else {
            return
        }
    }
    if w.status == .failed {
        print("failed /", w.error ?? "")
        return
    }
    if captureOutput == audio_output {
        if audio_writer.isReadyForMoreMediaData {
            if !video_on || (video_on && video_written) {
                audio_writer.append(sampleBuffer)
                //print("write audio")
            }
        } else {
            print("audio writer not ready")
        }
    } else if video_output != nil && captureOutput == video_output {
        if video_writer.isReadyForMoreMediaData {
            video_writer.append(sampleBuffer)
            if !video_written {
                print("added 1st video frame")
                video_written = true
            }
        } else {
            print("video writer not ready")
        }
    }
}
SWIFT 4
SOLUTION #1:
I resolved this by calling file_writer?.startWriting() as soon as possible after launching the app. Then, when you want to start recording, call file_writer?.startSession(atSourceTime:...).
When you are done recording and call finishRecording, set up a new writing session again once you get the callback saying the writer has finished.
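A minimal sketch of that pattern (prepareWriter, beginRecording, and endRecording are assumed names, not from the original answer; finishWriting(completionHandler:) is the underlying API):

    // Hedged sketch of Solution #1: start writing early, start the session late.
    func prepareWriter() {
        file_writer = try? AVAssetWriter(url: file_url!, fileType: .mp4)
        // ... create and add the AVAssetWriterInputs here ...
        file_writer?.startWriting()   // as early as possible, e.g. at launch
    }

    func beginRecording(firstSampleTime: CMTime) {
        // Only when the user actually starts recording:
        file_writer?.startSession(atSourceTime: firstSampleTime)
    }

    func endRecording() {
        file_writer?.finishWriting {
            // The writer is done; prepare a fresh one for the next take.
            self.prepareWriter()
        }
    }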
SOLUTION #2:
I resolved this by adding half a second to the starting time when calling AVAssetWriter.startSession, like this:
start_recording_time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
let startingTimeDelay = CMTimeMakeWithSeconds(0.5, 1000000000)
let startTimeToUse = CMTimeAdd(start_recording_time!, startingTimeDelay)
file_writer?.startSession(atSourceTime: startTimeToUse)
SOLUTION #3:
A better solution here is to record the timestamp of the first frame you receive and decide to write, and then start your session with that. Then you don't need any delay:
// Initialization, elsewhere:
var is_session_started = false
var videoStartingTimestamp = CMTime.invalid

// In code where you receive frames that you plan to write:
if !is_session_started {
    // Start writing at the timestamp of our earliest sample
    videoStartingTimestamp = currentTimestamp
    print("First video sample received: starting AVAssetWriter session: \(videoStartingTimestamp)")
    avAssetWriter?.startSession(atSourceTime: videoStartingTimestamp)
    is_session_started = true
}

// Add the current frame
pixelBufferAdapter?.append(myPixelBuffer, withPresentationTime: currentTimestamp)
Ok, stupid mistake...
When launching the app, I init my AVCaptureSession, add inputs, outputs, etc. And I was just calling start_new_record a bit too soon, just before commitConfiguration was called on my capture session.
At least my code might be useful to some people.
This is for future users...
None of the above worked for me. Then I tried changing the camera preset to medium, which worked fine.
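For reference, that preset change is a one-liner (a hedged aside; the original answer doesn't show code, and the dot-syntax spelling assumes a modern SDK):

    // Lower the session preset, as the answer describes.
    captureSession.sessionPreset = .medium   // AVCaptureSessionPresetMedium in older SDKs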

Switching Cameras slow in AVCaptureSession

I've looked at many other questions like this, and tried a lot of the solutions, but this case is a bit different. I'm using AVCaptureVideoDataOutputSampleBufferDelegate so that I can apply CIFilters to the live video feed. I'm using the following method to change cameras:
func changeCameras() {
    captureSession.stopRunning()
    var desiredPosition: AVCaptureDevicePosition?
    if front {
        desiredPosition = AVCaptureDevicePosition.Back
    } else {
        desiredPosition = AVCaptureDevicePosition.Front
    }
    let devices = AVCaptureDevice.devicesWithMediaType(AVMediaTypeVideo) as? [AVCaptureDevice]
    for device in devices! {
        if device.position == desiredPosition {
            self.captureSession.beginConfiguration()
            do {
                let input = try AVCaptureDeviceInput(device: device)
                for oldInput in self.captureSession.inputs {
                    print(oldInput)
                    self.captureSession.removeInput(oldInput as! AVCaptureInput)
                }
                print(input)
                self.captureSession.addInput(input)
                self.captureSession.commitConfiguration()
                dispatch_async(dispatch_get_main_queue(), { () -> Void in
                    self.captureSession.startRunning()
                })
            } catch {
                print("evic failed")
            }
        }
    }
    front = !front
}
The methods that I am using to set up the camera (called in viewDidLoad) and receive the sampleBuffer from the delegate are here: https://gist.github.com/JoeyBodnar/17e22e3c04093caa54cf240ed8b1b601.
One problem is that when pressing the button to change cameras, the screen freezes for a solid 4-5 seconds before the camera changes. I've tried the above method, as well as running the entire function on a separate queue, and it still takes a long time. I've never had this problem when switching cameras with a regular AVCaptureVideoPreviewLayer, so I think this may be caused in part by the fact that I'm using the sample buffer delegate, but I can't quite piece together how or why. Any help is appreciated, thanks!
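One pattern worth trying (a sketch, not an answer from the original page; it assumes a dedicated serial sessionQueue and the iOS 10+ AVCaptureDevice.DiscoverySession API) is to reconfigure inputs inside beginConfiguration/commitConfiguration without ever calling stopRunning, and to keep the work off the main thread:

    // Hedged sketch: swap cameras without stopping the session.
    func switchCamera(to position: AVCaptureDevice.Position) {
        sessionQueue.async {
            let discovery = AVCaptureDevice.DiscoverySession(
                deviceTypes: [.builtInWideAngleCamera],
                mediaType: .video,
                position: position)
            guard let device = discovery.devices.first,
                  let newInput = try? AVCaptureDeviceInput(device: device) else { return }

            self.captureSession.beginConfiguration()
            // Remove only the old video input; leave any audio input attached.
            for input in self.captureSession.inputs {
                if let deviceInput = input as? AVCaptureDeviceInput,
                   deviceInput.device.hasMediaType(.video) {
                    self.captureSession.removeInput(deviceInput)
                }
            }
            if self.captureSession.canAddInput(newInput) {
                self.captureSession.addInput(newInput)
            }
            self.captureSession.commitConfiguration()
            // No stopRunning()/startRunning(): the session keeps delivering frames.
        }
    }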

iOS Game Center Leaderboard forces a default score of 0

I'm currently working on a racing game where the player's goal is to complete a task in the shortest time possible.
I need to save the user's fastest time, but Game Center always has a default time of 0 seconds. And since Game Center only saves one value per user, it tosses the player's actual time/score.
The only thing I was able to do was to temporarily set the leaderboard to save the longest time. That way a time longer than 0 secs would be saved. Then I would switch it back so that shorter times would get saved.
This is obviously not a solution for a production level game.
Is there anything wrong with the way I'm saving or retrieving scores? I'm at a loss for how I can either turn off that default 0-second time or override it.
func loadLeaderBoard(level level: Int) {
    let leaderboardRequest = GKLeaderboard() as GKLeaderboard!
    leaderboardRequest.identifier = "level\(level)ID"
    if leaderboardRequest != nil {
        leaderboardRequest.loadScoresWithCompletionHandler({ (score, error) -> Void in
            if error != nil {
                print(error)
            } else {
                if let myscore: [GKScore] = score {
                    self.records.updateValue(myscore.first!.formattedValue!, forKey: level)
                }
            }
        })
    }
}
func recordTime(level level: Int, record: Int64) {
    // Note: the identifier is hardcoded to "level1ID" here, unlike
    // loadLeaderBoard, which builds it from the level parameter.
    let score = GKScore(leaderboardIdentifier: "level1ID")
    score.value = record
    GKScore.reportScores([score]) { (error) -> Void in
        if error != nil {
            print(error)
        } else {
            print("Score reported: \(score.value)")
        }
    }
}
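One thing worth ruling out (an editorial aside, not from the original question): on a leaderboard sorted low-to-high, a single report of 0 will permanently occupy the best slot, since Game Center keeps each player's best score per the sort order. A hedged guard on the submission path, assuming the recordTime function above:

    // Never report the default 0: on a low-to-high leaderboard it would
    // stick as the player's "best" time forever.
    func recordTimeIfValid(level: Int, record: Int64) {
        guard record > 0 else { return }
        recordTime(level: level, record: record)
    }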

Adding game center achievements to my game

I was trying to add achievements to one of my games. I made a list and started adding them, with images and descriptions, in iTunes Connect, but I can't find a tutorial written in Swift that uses a simple function like unlockAchievement with the achievement ID passed in.
In my game, inside the viewDidLoad of the game-over screen, I did this:
if (achievement condition) {
    // unlock achievement ("achievementID")
}
Is this a correct way to add achievements? In this view I have many stats passed from the previous view, so if I put the unlockAchievement function here, the popup should appear at the end of the game without distracting the player.
For the leaderboards I made a func that takes a score and a leaderboard; I want to do the same for achievements. How do I do it? Is it possible?
I also read about achievement progress in the Game Center view. Is it possible to avoid showing it, especially for hidden achievements that should show up only when their conditions are met?
This is how I solved it:
func loadAchievementPercentages() {
    print("get % ach")
    GKAchievement.loadAchievementsWithCompletionHandler { (allAchievements, error) -> Void in
        if error != nil {
            print("GC could not load ach, error: \(error)")
        } else {
            // nil if no progress on any achievement
            if allAchievements != nil {
                for theAchievement in allAchievements! {
                    if let singleAchievement: GKAchievement = theAchievement {
                        self.gameCenterAchievements[singleAchievement.identifier!] = singleAchievement
                    }
                }
            }
            for (id, achievement) in self.gameCenterAchievements {
                print("\(id) - \(achievement.percentComplete)")
            }
        }
    }
}
func incrementCurrentPercentageOfAchievement(identifier: String, amount: Double) {
    if GKLocalPlayer.localPlayer().authenticated {
        var currentPercentFound: Bool = false
        if gameCenterAchievements.count != 0 {
            for (id, achievement) in gameCenterAchievements {
                if id == identifier {
                    // progress on the achievement found
                    currentPercentFound = true
                    var currentPercent: Double = achievement.percentComplete
                    currentPercent = currentPercent + amount
                    reportAchievement(identifier, percentComplete: currentPercent)
                    break
                }
            }
        }
        if currentPercentFound == false {
            // no progress on the achievement yet
            reportAchievement(identifier, percentComplete: amount)
        }
    }
}
func reportAchievement(identifier: String, percentComplete: Double) {
    let achievement = GKAchievement(identifier: identifier)
    achievement.percentComplete = percentComplete
    let achievementArray: [GKAchievement] = [achievement]
    GKAchievement.reportAchievements(achievementArray, withCompletionHandler: { error -> Void in
        if error != nil {
            print(error)
        } else {
            print("reported achievement with % complete of \(percentComplete)")
            self.gameCenterAchievements.removeAll()
            self.loadAchievementPercentages()
        }
    })
}
In viewDidLoad I have this:
if (conditions for the achievement) {
    incrementCurrentPercentageOfAchievement("achievementID", amount: 100)
}
Tested it now and it worked. Next I want to do something different: I don't want to increment an achievement's percentage but to set it. For example, for an achievement like "combo of 10":
- Play a game
- Get a combo of 8, which shows 80% progress on the achievement
- Play another game
- Get a combo of 2
With the code I have now, this would unlock the achievement because the percentage is cumulative, so I have to write another function that sets the achievement progress percentage to 20 instead of adding to it. The sketch below shows one way to do that.
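A minimal sketch of such a setter (assuming the same gameCenterAchievements dictionary and reportAchievement function from above). One caveat worth knowing: Game Center itself never lowers a previously reported percentComplete, so reporting 20 after 80 only changes what this code tracks locally:

    // Hedged sketch: set (rather than add to) the progress value.
    func setCurrentPercentageOfAchievement(identifier: String, percent: Double) {
        guard GKLocalPlayer.localPlayer().authenticated else { return }
        // Game Center ignores reports lower than the stored value,
        // so a decrease is only visible in our local dictionary.
        reportAchievement(identifier, percentComplete: min(percent, 100))
    }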
Now, if in one game I meet the requirements and unlock the achievement, and in the next game I don't meet the condition again, the achievement stays unlocked, right?
