Troubles with MPRemoteCommandCenter - iOS

I have two problems with MPRemoteCommandCenter:
1) When I change the song, the remote controls clear the previous song's image, show the default image, and only then show the next song's image. I don't want the default image to appear at all. I've spent quite a while looking for a solution (see the sketch right after this list).
2) When AVPlayer is streaming live audio, the remote controls become inactive and show circular arrows (inside the circular arrow is the number 15; what does that mean?).
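For context, the lock-screen artwork comes from MPNowPlayingInfoCenter rather than from the remote commands themselves, so a common approach is to push fresh now-playing info (including artwork) on every track change. A minimal sketch, assuming you already have a UIImage for the new track (the helper name updateNowPlayingInfo is illustrative, not from my project):
import MediaPlayer
import UIKit

func updateNowPlayingInfo(title: String, artwork: UIImage?) {
    var info: [String: Any] = [MPMediaItemPropertyTitle: title]
    if let artwork = artwork {
        // Supplying artwork for the new track avoids falling back to the default image.
        info[MPMediaItemPropertyArtwork] = MPMediaItemArtwork(boundsSize: artwork.size) { _ in artwork }
    }
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info
}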
Here is my code for playing sound:
public func playAVSound(trackName: String) -> String {
    let path = _findPath(trackName: trackName)
    if path == "" {
        return "getting url"
    }

    if self.AVPlayerVC.player == nil {
        print("Init remote control events...")
        UIApplication.shared.beginReceivingRemoteControlEvents()
        let commandCenter = MPRemoteCommandCenter.shared()
        commandCenter.nextTrackCommand.isEnabled = true
        commandCenter.nextTrackCommand.addTarget(self, action: #selector(self.next))
        commandCenter.previousTrackCommand.isEnabled = true
        commandCenter.previousTrackCommand.addTarget(self, action: #selector(self.previous))
    }

    if self.AVPlayerVC.player != nil && self.play_info.trackName == trackName {
        if self.play_info.paused! {
            self.AVPlayerVC.player?.play()
            self.updatePlayInfo(number: Global.PlayList.find_by_trackName(trackName: trackName), trackName: trackName, path: path, paused: false)
            return "continue"
        } else {
            self.AVPlayerVC.player?.pause()
            self.play_info.paused = true
            return "pause"
        }
    }

    let nsurl = NSURL(string: path)
    if let url = nsurl {
        print("Play AV from : \(url)")
        let item = AVPlayerItem(url: url as URL)
        if self.AVPlayerVC.player?.currentItem == nil {
            self.AvPlayer = AVPlayer(playerItem: item)
            self.AVPlayerVC.player = self.AvPlayer
            self.AVPlayerVC.player?.automaticallyWaitsToMinimizeStalling = false
        } else {
            self.AvPlayer?.replaceCurrentItem(with: item)
        }
        NotificationCenter.default.addObserver(self, selector: #selector(self.playerDidFinishPlaying), name: .AVPlayerItemDidPlayToEndTime, object: item)
        self.AVPlayerVC.player?.play()
        self.updatePlayInfo(number: Global.PlayList.find_by_trackName(trackName: trackName), trackName: trackName, path: path, paused: false)
        return "playing"
    } else {
        print("Incorrect nsurl")
    }
    return "error"
}
private func _findPath(trackName: String) -> String {
    let n = Global.PlayList.find_by_trackName(trackName: trackName)
    var path = ""
    if n >= 0 {
        let p_item = Global.PlayList.PlaylistItems[n]
        if !p_item.fromData! {
            if p_item.playing_url == nil {
                NetLib.makeTrackUrl(trackName: trackName, closure: self.playAVSound)
            } else {
                path = p_item.playing_url!
            }
        } else {
            path = NetLib.makePath(filename: p_item.filename!)
        }
    } else {
        print("Error find_path: \(trackName) was not found in playlist")
    }
    return path
}
And here is the code that changes the song:
@objc private func previous() -> MPRemoteCommandHandlerStatus {
    var n = self.play_info.number! - 1
    if self.play_info.number == 0 {
        n = Global.PlayList.size() - 1
    }
    NotificationCenter.default.post(name: NSNotification.Name(rawValue: "load"), object: nil)
    _ = self.playAVSound(trackName: Global.PlayList.PlaylistItems[n].trackName!)
    print("Previous song \(Global.PlayList.PlaylistItems[n].trackName!)")
    return .success
}

I resolved the problem with the controls during live streaming: I was setting up playback off the main thread, and the remote controls do not work correctly when they are configured from a background thread.
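A minimal sketch of the idea, assuming the same @objc handlers as above: keep the MPRemoteCommandCenter setup on the main queue, even if playback itself is driven from a background queue.
DispatchQueue.main.async {
    // Remote-control and now-playing configuration belongs on the main queue.
    UIApplication.shared.beginReceivingRemoteControlEvents()
    let commandCenter = MPRemoteCommandCenter.shared()
    commandCenter.nextTrackCommand.isEnabled = true
    commandCenter.nextTrackCommand.addTarget(self, action: #selector(self.next))
    commandCenter.previousTrackCommand.isEnabled = true
    commandCenter.previousTrackCommand.addTarget(self, action: #selector(self.previous))
}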

Related

Swift UIKit label text doesn't update / view doesn't update

I have a problem:
I have a list of items in controller A. When I tap an item, I go to controller B (item info). There I press a button that calls the ledLightingButton_Tapped function, which activates the LED indicator for that animal.
@IBAction func ledLightingButton_Tapped(_ sender: Any) {
    if !GlobalData.shared.led_animals.contains(GlobalData.shared.selectedAnimalId) {
        GlobalData.shared.led_animals.append(GlobalData.shared.selectedAnimalId)
    }
    activateLED(at: GlobalData.shared.selectedAnimalId)
}

func activateLED(at animalId: String) {
    ServerSocket.shared?.perform(
        op: "ActivateLED",
        with: [
            "light_duration": "180",
            "led_color": "White",
            "client_data": "",
            "led_animals": [animalId]
        ]
    ) { err, data in
        guard err == nil else { return }
        print(data)
        let ledStatus = data[0]["led_request_status"].stringValue
        self.ledStatusLabel.text = ledStatus
        GlobalData.shared.isActiveLED = true
        self.startTimer()
    }
}
Upon successful activation, the animal's ID is added to the array, and startTimer is called, which requests checkLEDStatus every 10 seconds for all animals in the array.
func startTimer() {
    timer = Timer.scheduledTimer(timeInterval: 10.0, target: self, selector: #selector(updateCowStatus), userInfo: nil, repeats: true)
}

@objc func updateCowStatus() {
    self.checkLEDStatus()
}

func checkLEDStatus() {
    ServerSocket.shared?.perform(
        op: "CheckStatusLED",
        with: [
            "light_duration": "180",
            "led_color": "White",
            "client_data": "",
            "led_animals": GlobalData.shared.led_animals
        ]
    ) { err, data in
        guard err == nil else {
            GlobalData.shared.isActiveLED = false
            self.stopTimer()
            return
        }
        DispatchQueue.global(qos: .background).async {
            for i in 0..<data.count {
                if GlobalData.shared.selectedAnimalId == data[i]["animal_id"].stringValue {
                    let ledStatus = data[i]["led_request_status"].stringValue
                    if ledStatus.contains("Fail") {
                        guard let index = GlobalData.shared.led_animals.firstIndex(of: GlobalData.shared.selectedAnimalId) else { return }
                        GlobalData.shared.led_animals.remove(at: index)
                    }
                    DispatchQueue.main.async {
                        self.ledStatusLabel.text = ledStatus
                    }
                }
            }
        }
    }
}
The current status of the animal is displayed on the label. With a single animal it all works: I activate the LED, get a result from checkLEDStatus, and the label updates. But if I open controller B for animal 1 and activate it, go back, open animal 2 and activate it, and then return to animal 1, the label no longer updates. While debugging I can see that self.ledStatusLabel.text contains the new value, but the UI still shows the old one.
Please help me, thanks!
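One thing worth checking (an assumption on my part, not a confirmed diagnosis): each time controller B is opened a new instance is created, while the timer scheduled by the previous instance keeps firing and writing to the old, off-screen label. A minimal sketch of stopping the timer when a given instance of controller B goes away, assuming timer is an optional property of that controller:
override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)
    // Stop this instance's timer so an old, off-screen controller
    // doesn't keep receiving status updates for its detached label.
    timer?.invalidate()
    timer = nil
}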

Real-time AVAssetWriter synchronise audio and video when pausing/resuming

I am trying to record a video with sound using the iPhone's front camera. Since I also need to support pause/resume functionality, I need to use AVAssetWriter. I found an example online, written in Objective-C, which almost achieves the desired functionality (http://www.gdcl.co.uk/2013/02/20/iPhone-Pause.html).
Unfortunately, after converting this example to Swift, I noticed that if I pause/resume, at the end of each "section" there is a small but noticeable period during which the video is just a still frame while the audio keeps playing. So it seems that when isPaused is triggered, the recorded audio track ends up longer than the recorded video track.
Sorry if it may seem like a noob question, but I am not a great expert in AVFoundation and some help would be appreciated!
Below I post my implementation of didOutput sampleBuffer.
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    let isVideo = (connection == videoConntection)

    if !isCapturing || isPaused {
        return
    }

    if encoder == nil {
        if isVideo {
            return
        }
        // Wait for the first audio buffer so the encoder can be created with the
        // correct channel count and sample rate.
        if let fmt = CMSampleBufferGetFormatDescription(sampleBuffer) {
            let desc = CMAudioFormatDescriptionGetStreamBasicDescription(fmt as CMAudioFormatDescription)
            if let chan = desc?.pointee.mChannelsPerFrame, let rate = desc?.pointee.mSampleRate {
                let path = tempPath()!
                encoder = VideoEncoder(path: path, height: Int(cameraSize.height), width: Int(cameraSize.width), channels: chan, rate: rate)
            }
        }
    }

    if discont {
        if isVideo {
            return
        }
        discont = false
        // Work out how long we were paused and add it to the running offset.
        var pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        let last = lastAudio
        if last.flags.contains(CMTimeFlags.valid) {
            if cmOffset.flags.contains(CMTimeFlags.valid) {
                pts = CMTimeSubtract(pts, cmOffset)
            }
            let off = CMTimeSubtract(pts, last)
            print("setting offset from \(isVideo ? "video" : "audio")")
            print("adding \(CMTimeGetSeconds(off)) to \(CMTimeGetSeconds(cmOffset)) (pts \(CMTimeGetSeconds(pts)))")
            if cmOffset.value == 0 {
                cmOffset = off
            } else {
                cmOffset = CMTimeAdd(cmOffset, off)
            }
        }
        lastVideo.flags = []
        lastAudio.flags = []
        return
    }

    var out: CMSampleBuffer?
    if cmOffset.value > 0 {
        // Shift the buffer's timestamps back by the accumulated pause offset.
        var count: CMItemCount = CMSampleBufferGetNumSamples(sampleBuffer)
        let pInfo = UnsafeMutablePointer<CMSampleTimingInfo>.allocate(capacity: count)
        defer { pInfo.deallocate() }
        CMSampleBufferGetSampleTimingInfoArray(sampleBuffer, entryCount: count, arrayToFill: pInfo, entriesNeededOut: &count)
        for i in 0..<count {
            pInfo[i].decodeTimeStamp = CMTimeSubtract(pInfo[i].decodeTimeStamp, cmOffset)
            pInfo[i].presentationTimeStamp = CMTimeSubtract(pInfo[i].presentationTimeStamp, cmOffset)
        }
        CMSampleBufferCreateCopyWithNewTiming(allocator: nil, sampleBuffer: sampleBuffer, sampleTimingEntryCount: count, sampleTimingArray: pInfo, sampleBufferOut: &out)
    } else {
        out = sampleBuffer
    }

    // Remember the end time of the last buffer of each type.
    var pts = CMSampleBufferGetPresentationTimeStamp(out!)
    let dur = CMSampleBufferGetDuration(out!)
    if dur.value > 0 {
        pts = CMTimeAdd(pts, dur)
    }
    if isVideo {
        lastVideo = pts
    } else {
        lastAudio = pts
    }

    encoder?.encodeFrame(sampleBuffer: out!, isVideo: isVideo)
}
And this is my VideoEncoder class:
final class VideoEncoder {
    var writer: AVAssetWriter
    var videoInput: AVAssetWriterInput
    var audioInput: AVAssetWriterInput
    var path: String

    init(path: String, height: Int, width: Int, channels: UInt32, rate: Float64) {
        self.path = path
        if FileManager.default.fileExists(atPath: path) {
            try? FileManager.default.removeItem(atPath: path)
        }
        let url = URL(fileURLWithPath: path)
        writer = try! AVAssetWriter(outputURL: url, fileType: .mp4)
        videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: height,
            AVVideoHeightKey: width
        ])
        videoInput.expectsMediaDataInRealTime = true
        writer.add(videoInput)
        audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVNumberOfChannelsKey: channels,
            AVSampleRateKey: rate
        ])
        audioInput.expectsMediaDataInRealTime = true
        writer.add(audioInput)
    }

    func finish(with completionHandler: @escaping () -> Void) {
        writer.finishWriting(completionHandler: completionHandler)
    }

    func encodeFrame(sampleBuffer: CMSampleBuffer, isVideo: Bool) -> Bool {
        if CMSampleBufferDataIsReady(sampleBuffer) {
            if writer.status == .unknown {
                writer.startWriting()
                writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
            }
            if writer.status == .failed {
                QFLogger.shared.addLog(format: "[ERROR initiating AVAssetWriter]", args: [], error: writer.error)
                return false
            }
            if isVideo {
                if videoInput.isReadyForMoreMediaData {
                    videoInput.append(sampleBuffer)
                    return true
                }
            } else {
                if audioInput.isReadyForMoreMediaData {
                    audioInput.append(sampleBuffer)
                    return true
                }
            }
        }
        return false
    }
}
The rest of the code should be pretty obvious, but just to make it complete, here is what I have for pausing:
isPaused = true
discont = true
And here is resume:
isPaused = false
If anyone could help me understand how to align the video and audio tracks during this kind of live recording, that would be great!
OK, it turns out there was no mistake in the code I provided. The issue I experienced was caused by video smoothing being turned ON :) I guess it needs extra frames to smooth the video, which is why the video output freezes at the end for a short period of time.
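If the "smoothing" in question is video stabilization on the capture connection (an assumption on my part; videoOutput below stands for your AVCaptureVideoDataOutput), it can be turned off like this:
if let connection = videoOutput.connection(with: .video),
   connection.isVideoStabilizationSupported {
    // Stabilization buffers extra frames, which can show up as a short
    // frozen tail at the end of each recorded section.
    connection.preferredVideoStabilizationMode = .off
}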

How to make the AKSequencer switch soundfonts?

I'm building a feature with the AudioKit API where the user presses musical notes on the screen and a sound plays based on the SoundFont they chose. I then let them collect a series of notes and play them back in the order they chose.
The problem is that I am using an AKSequencer to play the notes back, and when the AKSequencer plays them it never sounds like the SoundFont; it just makes a beep sound.
Is there code that lets me change what sound comes out of the AKSequencer?
I'm using AudioKit to do this.
Sample is an NSObject (the Sampler1 class below) that contains the midisampler, player, etc. Here's the code:
class Sampler1: NSObject {
    var engine = AVAudioEngine()
    var sampler: AVAudioUnitSampler!
    var midisampler = AKMIDISampler()
    var octave = 4
    let midiChannel = 0
    var midiVelocity = UInt8(127)
    var audioGraph: AUGraph?
    var musicPlayer: MusicPlayer?
    var patch = UInt32(0)
    var synthUnit: AudioUnit?
    var synthNode = AUNode()
    var outputNode = AUNode()

    override init() {
        super.init()
        // engine = AVAudioEngine()
        sampler = AVAudioUnitSampler()
        engine.attach(sampler)
        engine.connect(sampler, to: engine.mainMixerNode, format: nil)
        loadSF2PresetIntoSampler(5)
        /* sampler2 = AVAudioUnitSampler()
        engine.attachNode(sampler2)
        engine.connect(sampler2, to: engine.mainMixerNode, format: nil)
        */
        addObservers()
        startEngine()
        setSessionPlayback()
        /* CheckError(NewAUGraph(&audioGraph))
        createOutputNode(audioGraph: audioGraph!, outputNode: &outputNode)
        createSynthNode()
        CheckError(AUGraphNodeInfo(audioGraph!, synthNode, nil, &synthUnit))
        let synthOutputElement: AudioUnitElement = 0
        let ioUnitInputElement: AudioUnitElement = 0
        CheckError(AUGraphConnectNodeInput(audioGraph!, synthNode, synthOutputElement,
                                           outputNode, ioUnitInputElement))
        CheckError(AUGraphInitialize(audioGraph!))
        CheckError(AUGraphStart(audioGraph!))
        loadnewSoundFont()
        loadPatch(patchNo: 0)*/
        setUpSequencer()
    }

    func createOutputNode(audioGraph: AUGraph, outputNode: UnsafeMutablePointer<AUNode>) {
        var cd = AudioComponentDescription(
            componentType: OSType(kAudioUnitType_Output),
            componentSubType: OSType(kAudioUnitSubType_RemoteIO),
            componentManufacturer: OSType(kAudioUnitManufacturer_Apple),
            componentFlags: 0, componentFlagsMask: 0)
        CheckError(AUGraphAddNode(audioGraph, &cd, outputNode))
    }

    func loadSF2PresetIntoSampler(_ preset: UInt8) {
        guard let bankURL = Bundle.main.url(forResource: "Arachno SoundFont - Version 1.0", withExtension: "sf2") else {
            print("could not load sound font")
            return
        }
        let folder = bankURL.path
        do {
            try self.sampler.loadSoundBankInstrument(at: bankURL,
                                                     program: preset,
                                                     bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                                                     bankLSB: UInt8(kAUSampler_DefaultBankLSB))
            try midisampler.loadSoundFont(folder, preset: 0, bank: kAUSampler_DefaultBankLSB)
            // try midisampler.loadPath(bankURL.absoluteString)
        } catch {
            print("error loading sound bank instrument")
        }
    }

    func createSynthNode() {
        var cd = AudioComponentDescription(
            componentType: OSType(kAudioUnitType_MusicDevice),
            componentSubType: OSType(kAudioUnitSubType_MIDISynth),
            componentManufacturer: OSType(kAudioUnitManufacturer_Apple),
            componentFlags: 0, componentFlagsMask: 0)
        CheckError(AUGraphAddNode(audioGraph!, &cd, &synthNode))
    }

    func setSessionPlayback() {
        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setCategory(AVAudioSession.Category.playback,
                                         options: AVAudioSession.CategoryOptions.mixWithOthers)
        } catch {
            print("couldn't set category \(error)")
            return
        }
        do {
            try audioSession.setActive(true)
        } catch {
            print("couldn't set category active \(error)")
            return
        }
    }

    func startEngine() {
        if engine.isRunning {
            print("audio engine already started")
            return
        }
        do {
            try engine.start()
            print("audio engine started")
        } catch {
            print("oops \(error)")
            print("could not start audio engine")
        }
    }

    func addObservers() {
        // The handler methods for these selectors are defined elsewhere in this class.
        NotificationCenter.default.addObserver(self,
                                               selector: Selector("engineConfigurationChange:"),
                                               name: NSNotification.Name.AVAudioEngineConfigurationChange,
                                               object: engine)
        NotificationCenter.default.addObserver(self,
                                               selector: Selector("sessionInterrupted:"),
                                               name: AVAudioSession.interruptionNotification,
                                               object: engine)
        NotificationCenter.default.addObserver(self,
                                               selector: Selector("sessionRouteChange:"),
                                               name: AVAudioSession.routeChangeNotification,
                                               object: engine)
    }

    func removeObservers() {
        NotificationCenter.default.removeObserver(self,
                                                  name: NSNotification.Name.AVAudioEngineConfigurationChange,
                                                  object: nil)
        NotificationCenter.default.removeObserver(self,
                                                  name: AVAudioSession.interruptionNotification,
                                                  object: nil)
        NotificationCenter.default.removeObserver(self,
                                                  name: AVAudioSession.routeChangeNotification,
                                                  object: nil)
    }

    private func setUpSequencer() {
        // set the sequencer voice to storedPatch so we can play along with it using patch
        var status = NewMusicSequence(&musicSequence)
        if status != noErr {
            print("\(#line) bad status \(status) creating sequence")
        }
        status = MusicSequenceNewTrack(musicSequence!, &track)
        if status != noErr {
            print("error creating track \(status)")
        }
        // 0xB0 = bank select, first we do the most significant byte
        var chanmess = MIDIChannelMessage(status: 0xB0 | sequencerMidiChannel, data1: 0, data2: 0, reserved: 0)
        status = MusicTrackNewMIDIChannelEvent(track!, 0, &chanmess)
        if status != noErr {
            print("creating bank select event \(status)")
        }
        // then the least significant byte
        chanmess = MIDIChannelMessage(status: 0xB0 | sequencerMidiChannel, data1: 32, data2: 0, reserved: 0)
        status = MusicTrackNewMIDIChannelEvent(track!, 0, &chanmess)
        if status != noErr {
            print("creating bank select event \(status)")
        }
        // set the voice
        chanmess = MIDIChannelMessage(status: 0xC0 | sequencerMidiChannel, data1: UInt8(0), data2: 0, reserved: 0)
        status = MusicTrackNewMIDIChannelEvent(track!, 0, &chanmess)
        if status != noErr {
            print("creating program change event \(status)")
        }
        CheckError(MusicSequenceSetAUGraph(musicSequence!, audioGraph))
        CheckError(NewMusicPlayer(&musicPlayer))
        CheckError(MusicPlayerSetSequence(musicPlayer!, musicSequence))
        CheckError(MusicPlayerPreroll(musicPlayer!))
    }

    func loadnewSoundFont() {
        var bankURL = Bundle.main.url(forResource: "Arachno SoundFont - Version 1.0", withExtension: "sf2")
        CheckError(AudioUnitSetProperty(synthUnit!, AudioUnitPropertyID(kMusicDeviceProperty_SoundBankURL), AudioUnitScope(kAudioUnitScope_Global), 0, &bankURL, UInt32(MemoryLayout<URL>.size)))
    }

    func loadPatch(patchNo: Int) {
        let channel = UInt32(0)
        var enabled = UInt32(1)
        var disabled = UInt32(0)
        patch = UInt32(patchNo)
        CheckError(AudioUnitSetProperty(
            synthUnit!,
            AudioUnitPropertyID(kAUMIDISynthProperty_EnablePreload),
            AudioUnitScope(kAudioUnitScope_Global),
            0,
            &enabled,
            UInt32(MemoryLayout<UInt32>.size)))
        let programChangeCommand = UInt32(0xC0 | channel)
        CheckError(MusicDeviceMIDIEvent(self.synthUnit!, programChangeCommand, patch, 0, 0))
        CheckError(AudioUnitSetProperty(
            synthUnit!,
            AudioUnitPropertyID(kAUMIDISynthProperty_EnablePreload),
            AudioUnitScope(kAudioUnitScope_Global),
            0,
            &disabled,
            UInt32(MemoryLayout<UInt32>.size)))
        // the previous programChangeCommand just triggered a preload
        // this one actually changes to the new voice
        CheckError(MusicDeviceMIDIEvent(synthUnit!, programChangeCommand, patch, 0, 0))
    }

    func play(number: UInt8) {
        sampler.startNote(number, withVelocity: 127, onChannel: 0)
    }

    func stop(number: UInt8) {
        sampler.stopNote(number, onChannel: 0)
    }

    func musicPlayerPlay() {
        var status = noErr
        var playing: DarwinBoolean = false
        CheckError(MusicPlayerIsPlaying(musicPlayer!, &playing))
        if playing != false {
            status = MusicPlayerStop(musicPlayer!)
            if status != noErr {
                print("Error stopping \(status)")
                CheckError(status)
                return
            }
        }
        CheckError(MusicPlayerSetTime(musicPlayer!, 0))
        CheckError(MusicPlayerStart(musicPlayer!))
    }

    var avsequencer: AVAudioSequencer!
    var sequencerMode = 1
    var sequenceStartTime: Date?
    var noteOnTimes = [Date](repeating: Date(), count: 128)
    var musicSequence: MusicSequence?
    var midisequencer = AKSequencer()
    // var musicPlayer: MusicPlayer?
    let sequencerMidiChannel = UInt8(1)
    var midisynthUnit: AudioUnit?
    // track is the variable the notes are written on
    var track: MusicTrack?
    var newtrack: AKMusicTrack?

    func setupSequencer(name: String) {
        self.avsequencer = AVAudioSequencer(audioEngine: self.engine)
        let options = AVMusicSequenceLoadOptions.smfChannelsToTracks
        if let fileURL = Bundle.main.url(forResource: name, withExtension: "mid") {
            do {
                try avsequencer.load(from: fileURL, options: options)
                print("loaded \(fileURL)")
            } catch {
                print("something screwed up \(error)")
                return
            }
        }
        avsequencer.prepareToPlay()
    }

    func playsequence() {
        if avsequencer.isPlaying {
            stopsequence()
        }
        avsequencer.currentPositionInBeats = TimeInterval(0)
        do {
            try avsequencer.start()
        } catch {
            print("cannot start \(error)")
        }
    }

    func creatnewtrck() {
        let sequencelegnth = AKDuration(beats: 8.0)
        newtrack = midisequencer.newTrack()
    }

    func addnotestotrack() {
        // AKMIDISampler
    }

    func stopsequence() {
        avsequencer.stop()
    }

    func setSequencerMode(mode: Int) {
        sequencerMode = mode
        switch sequencerMode {
        case SequencerMode.off.rawValue:
            print(mode)
            // CheckError(osstatus: MusicPlayerStop(musicPlayer!))
        case SequencerMode.recording.rawValue:
            print(mode)
        case SequencerMode.playing.rawValue:
            print(mode)
        default:
            break
        }
    }

    /* func noteOn(note: UInt8) {
        let noteCommand = UInt32(0x90 | midiChannel)
        let base = note - 48
        let octaveAdjust = (UInt8(octave) * 12) + base
        let pitch = UInt32(octaveAdjust)
        CheckError(MusicDeviceMIDIEvent(self.midisynthUnit!,
                                        noteCommand, pitch, UInt32(self.midiVelocity), 0))
    }
    func noteOff(note: UInt8) {
        let channel = UInt32(0)
        let noteCommand = UInt32(0x80 | channel)
        let base = note - 48
        let octaveAdjust = (UInt8(octave) * 12) + base
        let pitch = UInt32(octaveAdjust)
        CheckError(MusicDeviceMIDIEvent(self.midisynthUnit!,
                                        noteCommand, pitch, 0, 0))
    }*/

    func noteOn(note: UInt8) {
        if sequencerMode == SequencerMode.recording.rawValue {
            print("recording sequence note")
            noteOnTimes[Int(note)] = Date()
        } else {
            print("no notes")
        }
    }

    func noteOff(note: UInt8, timestamp: Float64, sequencetime: Date) {
        if sequencerMode == SequencerMode.recording.rawValue {
            let duration: Double = Date().timeIntervalSince(noteOnTimes[Int(note)])
            let onset: Double = noteOnTimes[Int(note)].timeIntervalSince(sequencetime)
            // the order of the notes in the array
            var beat: MusicTimeStamp = 0
            CheckError(MusicSequenceGetBeatsForSeconds(musicSequence!, onset, &beat))
            var mess = MIDINoteMessage(channel: sequencerMidiChannel,
                                       note: note,
                                       velocity: midiVelocity,
                                       releaseVelocity: 0,
                                       duration: Float(duration))
            CheckError(MusicTrackNewMIDINoteEvent(track!, timestamp, &mess))
        }
    }
}
The code that plays the collection of notes
_ = sample.midisequencer.newTrack()
let sequencelegnth = AKDuration(beats: 8.0)
sample.midisequencer.setLength(sequencelegnth)
sample.sequenceStartTime = format.date(from: format.string(from: NSDate() as Date))
sample.midisequencer.setTempo(160.0)
sample.midisequencer.enableLooping()
sample.midisequencer.play()
This is the code that changes the soundfont
func loadSF2PresetIntoSampler(_ preset: UInt8) {
    guard let bankURL = Bundle.main.url(forResource: "Arachno SoundFont - Version 1.0", withExtension: "sf2") else {
        print("could not load sound font")
        return
    }
    let folder = bankURL.path
    do {
        try self.sampler.loadSoundBankInstrument(at: bankURL,
                                                 program: preset,
                                                 bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                                                 bankLSB: UInt8(kAUSampler_DefaultBankLSB))
        try midisampler.loadSoundFont(folder, preset: 0, bank: kAUSampler_DefaultBankLSB)
        // try midisampler.loadPath(bankURL.absoluteString)
    } catch {
        print("error loading sound bank instrument")
    }
}
The midisampler is an AKMIDISampler.
At minimum, you need to connect an AKSequencer to some kind of output to get it to make sounds. With the older version (now called AKAppleSequencer), if you don't explicitly set the output, you will hear the default (beepy) sampler.
For example, with AKAppleSequencer (its name in AudioKit 4.8; it was called AKSequencer in earlier versions):
let track = seq.newTrack()
track!.setMIDIOutput(sampler.midiIn)
And with the new AKSequencer:
let track = seq.newTrack() // for the new AKSequencer, in AudioKit 4.8
track!.setTarget(node: sampler)
Also, make sure that you have enabled the audio background mode in your project's Capabilities, as missing this step will also get you the default sampler.
You've included a massive amount of code (and I haven't tried to absorb all of what is going on here) but the fact that you are using instances of both MusicSequence and AKSequencer (which I suspect is the older version, now called AKAppleSequencer, which is merely a wrapper around MusicSequence) is something of a red flag.
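Putting those two pieces together, here is a minimal sketch of the older AKSequencer/AKAppleSequencer path, assuming AudioKit 4.x and an .sf2 file named "Arachno SoundFont - Version 1.0" in the main bundle (the loadSoundFont call is the same one used in the question):
import AudioKit

let sampler = AKMIDISampler()
let sequencer = AKSequencer()          // AKAppleSequencer in AudioKit 4.8+

do {
    try sampler.loadSoundFont("Arachno SoundFont - Version 1.0", preset: 0, bank: 0)
    let track = sequencer.newTrack()
    track?.setMIDIOutput(sampler.midiIn)   // notes on this track now drive the SoundFont sampler
    AudioKit.output = sampler
    try AudioKit.start()
    // ...add notes to the track here, then:
    sequencer.play()
} catch {
    print("AudioKit setup failed: \(error)")
}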

Swift AVFoundation Reading and Analyzing a file in real time

I am having trouble reading a file from disk using AVFoundation and performing rendering and analysis in real time.
I have a pipeline of code that I know runs in real time and does analysis on a video file. This pipeline works in real time when driven by the camera session. However, this is not the case when I read the file as shown below. Can anyone tell me where I might be going wrong?
protocol VideoStreamTestBenchDelegate {
    func frameBuffer(buffer: CMSampleBuffer)
}

class VideoStreamTestBench {
    let asset: AVAsset
    let assetReader: AVAssetReader
    let playAtActualSpeed: Bool
    let loop: Bool
    var videoEncodingIsFinished = false
    var previousFrameTime = kCMTimeZero
    var previousActualFrameTime = CFAbsoluteTimeGetCurrent()
    var numberOfFramesCaptured = 0
    var totalFrameTimeDuringCapture: Double = 0.0
    var delegate: VideoStreamTestBenchDelegate?

    public convenience init(url: URL, playAtActualSpeed: Bool = false, loop: Bool = false) throws {
        let inputOptions = [AVURLAssetPreferPreciseDurationAndTimingKey: NSNumber(value: true)]
        let inputAsset = AVURLAsset(url: url, options: inputOptions)
        try self.init(asset: inputAsset, playAtActualSpeed: playAtActualSpeed, loop: loop)
    }

    public init(asset: AVAsset, playAtActualSpeed: Bool = false, loop: Bool = false) throws {
        self.asset = asset
        self.playAtActualSpeed = playAtActualSpeed
        self.loop = loop
        assetReader = try AVAssetReader(asset: self.asset)
        let outputSettings: [String: AnyObject] = [(kCVPixelBufferPixelFormatTypeKey as String): NSNumber(value: Int32(kCVPixelFormatType_32BGRA))]
        let readerVideoTrackOutput = AVAssetReaderTrackOutput(track: self.asset.tracks(withMediaType: AVMediaTypeVideo)[0], outputSettings: outputSettings)
        readerVideoTrackOutput.alwaysCopiesSampleData = false
        assetReader.add(readerVideoTrackOutput)
        // TODO: Audio here
    }

    public func start() {
        asset.loadValuesAsynchronously(forKeys: ["tracks"], completionHandler: {
            DispatchQueue.global(priority: DispatchQueue.GlobalQueuePriority.default).async(execute: {
                guard self.asset.statusOfValue(forKey: "tracks", error: nil) == .loaded else { return }
                guard self.assetReader.startReading() else {
                    print("Couldn't start reading")
                    return
                }
                var readerVideoTrackOutput: AVAssetReaderOutput? = nil
                for output in self.assetReader.outputs {
                    if output.mediaType == AVMediaTypeVideo {
                        readerVideoTrackOutput = output
                    }
                }
                while self.assetReader.status == .reading {
                    self.readNextVideoFrame(from: readerVideoTrackOutput!)
                }
                if self.assetReader.status == .completed {
                    self.assetReader.cancelReading()
                    if self.loop {
                        // TODO: Restart movie processing
                    } else {
                        self.endProcessing()
                    }
                }
            })
        })
    }

    public func cancel() {
        assetReader.cancelReading()
        self.endProcessing()
    }

    func endProcessing() {
    }

    func readNextVideoFrame(from videoTrackOutput: AVAssetReaderOutput) {
        if assetReader.status == .reading && !videoEncodingIsFinished {
            if let sampleBuffer = videoTrackOutput.copyNextSampleBuffer() {
                if playAtActualSpeed {
                    // Do this outside of the video processing queue to not slow that down while waiting
                    let currentSampleTime = CMSampleBufferGetOutputPresentationTimeStamp(sampleBuffer)
                    let differenceFromLastFrame = CMTimeSubtract(currentSampleTime, previousFrameTime)
                    let currentActualTime = CFAbsoluteTimeGetCurrent()
                    let frameTimeDifference = CMTimeGetSeconds(differenceFromLastFrame)
                    let actualTimeDifference = currentActualTime - previousActualFrameTime
                    if frameTimeDifference > actualTimeDifference {
                        usleep(UInt32(round(1000000.0 * (frameTimeDifference - actualTimeDifference))))
                    }
                    previousFrameTime = currentSampleTime
                    previousActualFrameTime = CFAbsoluteTimeGetCurrent()
                }
                DispatchQueue.global().sync {
                    self.delegate?.frameBuffer(buffer: sampleBuffer)
                    CMSampleBufferInvalidate(sampleBuffer)
                }
            } else {
                if !loop {
                    videoEncodingIsFinished = true
                    if videoEncodingIsFinished {
                        self.endProcessing()
                    }
                }
            }
        }
    }
}
// This is the delegate
public func bufferReader(_ reader: BufferReader!, didGetNextVideoSample bufferRef: CMSampleBuffer!) {
    // let posePoints: [Any] = self.visageBackend.posePoints(with: bufferRef)
    // var regions: [Any]? = nil
    //
    // if (posePoints.count > 0) {
    //     regions = (self.luaBackend?.regions(forPosePoints: posePoints))!
    // }
    //
    // // extract
    // if (regions != nil) {
    //     let rois: [Any] = (self.luaBackend?.extractedRegionInfos(for: bufferRef, andRegions: regions))!
    //     print(rois)
    // }
    //
    // self.dLibRenderEngine.render(with: bufferRef, andPoints: posePoints, andRegions: regions)
    self.backgroundRenderQueue.async { [weak self] in
        if self?.previewLayer?.isReadyForMoreMediaData == true {
            self?.previewLayer?.enqueue(bufferRef!)
        }
    }
}

AudioServicesPlaySystemSound sound volume dependency

I'm making an iOS keyboard extension.
To play the click sound, I used AudioServicesPlaySystemSound.
However, some users reported that the click sound sometimes follows the 'bell sound volume' (bell icon) and sometimes the 'normal sound volume' (speaker icon).
I tested with Apple's memo app and found cases where this dependency is inconsistent.
Here is my code for initialization:
func initTypeSound(soundIndex: Int) {
    let bundle = Bundle.main
    for i in 0..<MAX_TYPE_SOUND_COUNT {
        if let url = bundle.url(forResource: String(format: "type%d_%d", soundIndex, i), withExtension: "wav") {
            // file exists
            var soundID: SystemSoundID = 0
            AudioServicesCreateSystemSoundID(url as CFURL, &soundID)
            mTypeSoundIDs.insert(soundID, at: i)
        } else {
            // no file
        }
    }
}
And the code to play:
func play(soundType: KKSoundType) {
    if !mHasPermission || !mIsSound {
        return
    }
    let session = AVAudioSession.sharedInstance()
    let systemVolume = session.outputVolume
    if systemVolume == 0 {
        return
    }
    var soundId: SystemSoundID!
    switch soundType {
    case .Type:
        let rand = Int(arc4random_uniform(UInt32(mTypeSoundIDs.count)))
        soundId = mTypeSoundIDs[rand]
    case .Space:
        soundId = mSpaceSoundID
    default:
        return
    }
    if mIsSound {
        AudioServicesPlaySystemSound(soundId)
    }
}
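Not from the question, but worth noting as an alternative for keyboard extensions specifically: the standard system keyboard click can be requested through UIDevice.playInputClick(), which plays the built-in click and respects the user's Keyboard Clicks setting; the input view has to opt in via UIInputViewAudioFeedback. A minimal sketch (the class name is illustrative):
import UIKit

class ClickableKeyboardView: UIView, UIInputViewAudioFeedback {
    // Required opt-in so playInputClick() is honoured for this input view.
    var enableInputClicksWhenVisible: Bool { return true }

    func keyPressed() {
        UIDevice.current.playInputClick()
    }
}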
