AudioKit: change sound based upon gyro data / swing phone around?

This is an AudioKit question:
I am really new to AudioKit and audio in general.
My question is: How could I use AudioKit to create a sound that changes as I move my phone around? I already know how to get the gyro information, so let's say I can take the gyro values between 0 and 10, zero being no movement and 10 being a lot of movement of the phone. I want to translate that into a sound that corresponds to how hard/quickly the phone is being moved. To start, just move the sound higher in pitch as the speed increases, down to a low pitch at zero. Sounds easy, yes?
I'm just not experienced enough to know which AudioKit class to use or how to use it to achieve my results.
Thank you!
Michael

You have to write your own AKOperationGenerator.
enum PitchEnvVCOSynthParameter: Int {
    case frequency, gate
}

struct PitchEnvVCO {
    static var frequency: AKOperation {
        return AKOperation.parameters[PitchEnvVCOSynthParameter.frequency.rawValue]
    }
    static var gate: AKOperation {
        return AKOperation.parameters[PitchEnvVCOSynthParameter.gate.rawValue]
    }
}

extension AKOperationGenerator {
    var frequency: Double {
        get { return self.parameters[PitchEnvVCOSynthParameter.frequency.rawValue] }
        set { self.parameters[PitchEnvVCOSynthParameter.frequency.rawValue] = newValue }
    }
    var gate: Double {
        get { return self.parameters[PitchEnvVCOSynthParameter.gate.rawValue] }
        set { self.parameters[PitchEnvVCOSynthParameter.gate.rawValue] = newValue }
    }
}
and
let generator = AKOperationGenerator { parameters in
    let oscillator = AKOperation.squareWave(
        frequency: PitchEnvVCO.frequency
    )
    return oscillator
}
and then make your variable control the frequency
var vco1Freq: Double = 440.0 {
    didSet {
        generator.parameters[PitchEnvVCOSynthParameter.frequency.rawValue] = vco1Freq
    }
}
Fetch the gyro data and make it control your variable as described here.
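For the gyro side, here is a minimal sketch of that glue using CoreMotion (not from the original answer; the 0-10 movement scale and the 220-880 Hz pitch range are illustrative assumptions, and generator is the AKOperationGenerator defined above, already started and connected to AudioKit's output):
import CoreMotion

let motionManager = CMMotionManager()

func startGyroControl() {
    guard motionManager.isGyroAvailable else { return }
    motionManager.gyroUpdateInterval = 1.0 / 60.0
    motionManager.startGyroUpdates(to: .main) { data, _ in
        guard let rate = data?.rotationRate else { return }
        // Overall angular speed, clamped to the question's 0...10 scale
        let magnitude = min(sqrt(rate.x * rate.x + rate.y * rate.y + rate.z * rate.z), 10.0)
        // Linear map: no movement -> 220 Hz, fast movement -> 880 Hz
        generator.frequency = 220.0 + (magnitude / 10.0) * (880.0 - 220.0)
    }
}
Since frequency here is the computed property from the extension above, each gyro update pushes straight through to the generator's parameter array.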

Related

"__CFRunLoopModeFindSourceForMachPort returned NULL" messages when using AVAudioPlayer

We're working on a SpriteKit game. In order to have more control over sound effects, we switched from using SKAudioNodes to having some AVAudioPlayers. While everything seems to be working well in terms of game play, frame rate, and sounds, we're seeing occasional error(?) messages in the console output when testing on physical devices:
... [general] __CFRunLoopModeFindSourceForMachPort returned NULL for mode 'kCFRunLoopDefaultMode' livePort: #####
It doesn't seem to really cause any harm when it happens (no sound glitches or hiccups in frame rate or anything), but not understanding exactly what the message means and why it's happening is making us nervous.
Details:
The game is all standard SpriteKit, all events driven by SKActions, nothing unusual there.
Our uses of AVFoundation are the following. Initialization of app sounds:
class Sounds {
    let soundQueue: DispatchQueue

    init() {
        do {
            try AVAudioSession.sharedInstance().setActive(true)
        } catch {
            print(error.localizedDescription)
        }
        soundQueue = DispatchQueue.global(qos: .background)
    }

    func execute(_ soundActions: @escaping () -> Void) {
        soundQueue.async(execute: soundActions)
    }
}
Creating various sound effect players:
guard let player = try? AVAudioPlayer(contentsOf: url) else {
    fatalError("Unable to instantiate AVAudioPlayer")
}
player.prepareToPlay()
Playing a sound effect:
let pan = stereoBalance(...)
sounds.execute {
    if player.pan != pan {
        player.pan = pan
    }
    player.play()
}
The AVAudioPlayers are all for short sound effects with no looping, and they get reused. We create about 25 players total, including multiple players for certain effects when they can repeat in quick succession. For a particular effect, we rotate through the players for that effect in a fixed sequence. We have verified that whenever a player is triggered, its isPlaying is false, so we're not trying to invoke play on something that's already playing.
The message doesn't appear that often. Over the course of a 5-10 minute game with possibly thousands of sound effects, we see the message maybe 5-10 times.
The message seems to occur most commonly when a bunch of sound effects are being played in quick succession, but it doesn't feel like it's 100% correlated with that.
Not using the dispatch queue (i.e., having sounds.execute just call soundActions() directly) doesn't fix the issue (though that does cause the game to lag significantly). Changing the dispatch queue to some of the other priorities like .utility also doesn't affect the issue.
Making sounds.execute just return immediately (i.e., don't actually call the closure at all, so there's no play()) does eliminate the messages.
We did find the source code that's producing the message at this link:
https://github.com/apple/swift-corelibs-foundation/blob/master/CoreFoundation/RunLoop.subproj/CFRunLoop.c
but we don't understand it except at an abstract level, and are not sure how run loops are involved in the AVFoundation stuff.
Lots of googling has turned up nothing helpful. And as I indicated, it doesn't seem to be causing noticeable problems at all. It would be nice to know why it's happening though, and either how to fix it or to have certainty that it won't ever be an issue.
We're still working on this, but have experimented enough that it's clear how we should do things. Outline:
Use the scene's audioEngine property.
For each sound effect, make an AVAudioFile for reading the audio's URL from the bundle. Read it into an AVAudioPCMBuffer. Stick the buffers into a dictionary that's indexed by sound effect.
Make a bunch of AVAudioPlayerNodes, attach() them to the audioEngine, and connect(playerNode, to: audioEngine.mainMixerNode). At the moment we're creating these dynamically, searching through our current list of player nodes to find one that's not playing and making a new one if none is available. That probably has more overhead than needed, since we need callbacks to observe when a player node finishes whatever it's playing so we can set it back to a stopped state. We'll try switching to a fixed maximum number of active sound effects and rotating through the players in order.
To play a sound effect, grab the buffer for the effect, find a non-busy playerNode, and do playerNode.scheduleBuffer(buffer, ...). And playerNode.play() if it's not currently playing.
I may update this with some more detailed code once we have things fully converted and cleaned up. We still have a couple of long-running AVAudioPlayers that we haven't switched to use AVAudioPlayerNode going through the mixer. But anyway, pumping the vast majority of sound effects through the scheme above has eliminated the error message, and it needs far less stuff sitting around since there's no duplication of the sound effects in-memory like we had before. There's a tiny bit of lag, but we haven't even tried putting some stuff on a background thread yet, and maybe not having to search for and constantly start/stop players would even eliminate it without having to worry about that.
Since switching to this approach, we've had no more runloop complaints.
Edit: Some example code...
import SpriteKit
import AVFoundation

enum SoundEffect: String, CaseIterable {
    case playerExplosion = "player_explosion"
    // lots more

    var url: URL {
        guard let url = Bundle.main.url(forResource: self.rawValue, withExtension: "wav") else {
            fatalError("Sound effect file \(self.rawValue) missing")
        }
        return url
    }

    func audioBuffer() -> AVAudioPCMBuffer {
        guard let file = try? AVAudioFile(forReading: self.url) else {
            fatalError("Unable to instantiate AVAudioFile")
        }
        guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat, frameCapacity: AVAudioFrameCount(file.length)) else {
            fatalError("Unable to instantiate AVAudioPCMBuffer")
        }
        do {
            try file.read(into: buffer)
        } catch {
            fatalError("Unable to read audio file into buffer, \(error.localizedDescription)")
        }
        return buffer
    }
}

class Sounds {
    var audioBuffers = [SoundEffect: AVAudioPCMBuffer]()
    // more stuff

    init() {
        for effect in SoundEffect.allCases {
            preload(effect)
        }
    }

    func preload(_ sound: SoundEffect) {
        audioBuffers[sound] = sound.audioBuffer()
    }

    func cachedAudioBuffer(_ sound: SoundEffect) -> AVAudioPCMBuffer {
        guard let buffer = audioBuffers[sound] else {
            fatalError("Audio buffer for \(sound.rawValue) was not preloaded")
        }
        return buffer
    }
}

class Globals {
    // Sounds loaded once and shared among all scenes in the game
    static let sounds = Sounds()
}

class SceneAudio {
    let stereoEffectsFrame: CGRect
    let audioEngine: AVAudioEngine
    var playerNodes = [AVAudioPlayerNode]()
    var nextPlayerNode = 0
    // more stuff

    init(stereoEffectsFrame: CGRect, audioEngine: AVAudioEngine) {
        self.stereoEffectsFrame = stereoEffectsFrame
        self.audioEngine = audioEngine
        do {
            try audioEngine.start()
            let buffer = Globals.sounds.cachedAudioBuffer(.playerExplosion)
            // We got up to about 10 simultaneous sounds when really pushing the game
            for _ in 0 ..< 10 {
                let playerNode = AVAudioPlayerNode()
                playerNodes.append(playerNode)
                audioEngine.attach(playerNode)
                audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: buffer.format)
                playerNode.play()
            }
        } catch {
            logging("Cannot start audio engine, \(error.localizedDescription)")
        }
    }

    func soundEffect(_ sound: SoundEffect, at position: CGPoint = .zero) {
        guard audioEngine.isRunning else { return }
        let buffer = Globals.sounds.cachedAudioBuffer(sound)
        let playerNode = playerNodes[nextPlayerNode]
        nextPlayerNode = (nextPlayerNode + 1) % playerNodes.count
        playerNode.pan = stereoBalance(position)
        playerNode.scheduleBuffer(buffer)
    }

    func stereoBalance(_ position: CGPoint) -> Float {
        guard stereoEffectsFrame.width != 0 else { return 0 }
        guard position.x <= stereoEffectsFrame.maxX else { return 1 }
        guard position.x >= stereoEffectsFrame.minX else { return -1 }
        return Float((position.x - stereoEffectsFrame.midX) / (0.5 * stereoEffectsFrame.width))
    }
}

class GameScene: SKScene {
    var audio: SceneAudio!
    // lots more stuff

    // somewhere in initialization
    // gameFrame is the area where action takes place and which
    // determines panning for stereo sound effects
    audio = SceneAudio(stereoEffectsFrame: gameFrame, audioEngine: audioEngine)

    func destroyPlayer(_ player: SKSpriteNode) {
        audio.soundEffect(.playerExplosion, at: player.position)
        // more stuff
    }
}

How to build an accurate iPhone strobe light using Swift

I am trying to build a video stroboscopy app to power the light source on an otolaryngology endoscope. https://youtu.be/mJedwz_r2Pc shows an example of what a traditional stroboscopy system does. It flashes at 0.5 Hz below the fundamental frequency of the patient to induce a slow-motion effect that allows clinicians to visualize the motion of the cords and mucosal wave. To do this I need to strobe at roughly 120 to 250 Hz.
I have used print statements with counters to verify my frequencies. When I comment out the code connecting the functions to the torch, I get an accurate frequency. When I uncomment the torch code, I lose accuracy. I do not understand why the torch functions are slowing down the strobe. Any insight or help would be greatly appreciated.
class StrobeLights {
    var counter: Int = 0
    var timer: Timer
    var isStrobing: Bool
    var isLightOn: Bool
    var frequency: Double
    var start = DispatchTime.now()
    var end = DispatchTime.now()
    var active: Bool

    init() {
        self.counter = 0
        self.timer = Timer()
        self.isStrobing = false
        self.isLightOn = false
        self.frequency = 200
        self.active = false
    }

    // Start strobe process
    func toggleStrobe() {
        if isLightOn == true {
            self.isLightOn = false
            self.timer.invalidate()
            print("Turning timer off")
            self.end = DispatchTime.now()
            let nanoTime = end.uptimeNanoseconds - start.uptimeNanoseconds
            let timeInterval = Double(nanoTime) / 1_000_000_000
            print("I counted this high \(counter) in this many seconds \(timeInterval)")
            //toggleTorch(on: false)
            counter = 0
            incrementCounter()
        } else {
            self.isLightOn = true
            // change made by removing frequency --> 10
            self.timer = Timer.scheduledTimer(timeInterval: 1/frequency, target: self, selector: #selector(incrementCounter), userInfo: nil, repeats: true)
            print("Turning timer on")
            self.start = DispatchTime.now()
            //toggleTorch(on: true)
        }
    }

    // Increase counter by one
    @objc func incrementCounter() {
        self.toggleTorch(on: false)
        self.counter += 1
        //print("\(self.counter)")
        self.toggleTorch(on: true)
    }

    // Turns light on or off
    @objc func toggleTorch(on: Bool) {
        guard let device = AVCaptureDevice.default(for: AVMediaType.video) else { return }
        if device.hasTorch {
            if device.isTorchAvailable {
                do {
                    try device.lockForConfiguration()
                    if on == true {
                        do {
                            try device.setTorchModeOn(level: 0.5)
                        } catch {
                            print("Could not set torch level")
                        }
                        device.torchMode = .on
                    } else {
                        device.torchMode = .off
                    }
                    device.unlockForConfiguration()
                } catch {
                    print("Torch could not be used")
                }
            } else {
                print("torch unavailable")
            }
        } else {
            print("torch unavailable")
        }
    }
}
Here are some things I would try:
Getting the AVCaptureDevice and locking its configuration every time you want to toggle the torch is certainly wasting time. Get the device once, put it in an ivar, and lock its configuration once, rather than on every call to toggleTorch(on:).
AVFoundation is at least somewhat multithread-capable, so possibly you could async-dispatch the call to setTorchModeOn to a non-main queue. I have no idea if this is safe or not, but it's worth a try.
Use a DispatchSourceTimer instead of an NSTimer, with the .strict option and a minimal leeway. The system will make more of an effort to call you on time.
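A rough sketch combining the first and third suggestions (untested against real torch hardware; the queue label, QoS choice, and 0.5 torch level are arbitrary assumptions):
import AVFoundation

final class StrictStrobe {
    // Suggestion 1: fetch the device once instead of on every toggle
    private let device = AVCaptureDevice.default(for: .video)
    private var timer: DispatchSourceTimer?
    private var lightOn = false

    func start(togglesPerSecond: Double) {
        guard let device = device, device.hasTorch else { return }
        try? device.lockForConfiguration() // lock once; unlock in stop()
        // Suggestion 3: a strict timer with zero leeway; the system tries
        // harder to fire it on schedule than it does for an NSTimer
        let source = DispatchSource.makeTimerSource(
            flags: .strict,
            queue: DispatchQueue(label: "strobe", qos: .userInteractive))
        source.schedule(deadline: .now(),
                        repeating: .nanoseconds(Int(1_000_000_000 / togglesPerSecond)),
                        leeway: .nanoseconds(0))
        source.setEventHandler { [weak self] in
            guard let self = self, let device = self.device else { return }
            // Each fire flips the torch, so n toggles/second = n/2 full flash cycles
            self.lightOn.toggle()
            if self.lightOn {
                try? device.setTorchModeOn(level: 0.5)
            } else {
                device.torchMode = .off
            }
        }
        source.resume()
        timer = source
    }

    func stop() {
        timer?.cancel()
        timer = nil
        device?.torchMode = .off
        device?.unlockForConfiguration()
    }
}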
On the other hand, I won't be surprised if those don't help. I don't think the iPhone torch is meant to be used as a 120 Hz strobe.
You may recall the recent kerfuffle about iOS throttling CPU speed on some iPhones with old batteries. iOS does this because otherwise, sudden bursts of activity can try to draw so much power that the battery can't keep up, and the system abruptly shuts off. So we can guess that before turning on the torch, iOS checks the battery level and recent power consumption and perhaps shuts off or slows down some parts of the system (software and hardware) to “make room” for the torch.
Also, from the isTorchAvailable documentation:
The torch may become unavailable if, for example, the device overheats and needs to cool off.
So we can also guess that iOS checks the hardware temperature before turning on the torch.
These checks and actions take some amount of time to execute; possibly several milliseconds. So with this knowledge, it's not surprising that iOS cannot flash the torch at 120 Hz or more.
There's also the question of whether the torch can physically cycle that quickly. When used as a steady-on torch, it doesn't need to turn on or off particularly fast. When used as a camera flash, it is powered by a capacitor that takes time to charge and then delivers a burst of power.

Extension in Swift with CoreMedia data types

Since yesterday I thought I understood how to write extensions in Swift, but now I'm not really sure. I have a simple extension for the CMTimebase class.
extension CMTimebase {
    class func instance(withMasterClock masterClock: CMClock) -> CMTimebase {
        var timebase: CMTimebase? = nil
        CMTimebaseCreateWithMasterClock(kCFAllocatorDefault, masterClock, &timebase)
        return timebase!
    }

    var rate: Float64 {
        set {
            CMTimebaseSetRate(self, rate)
            print(self)
        }
        get {
            return CMTimebaseGetRate(self)
        }
    }

    var seconds: Float64 {
        return CMTimeGetSeconds(CMTimebaseGetTime(self))
    }
}
In my code I'm using it like this:
let timebase = CMTimebase.instance(withMasterClock: captureSession.masterClock)
timebase.rate = 1.0
And I wonder why this is not working, because when I print(timebase) the rate is still 0.0 after the invocation. What is really funny: when I invoke
CMTimebaseSetRate(timebase, 1.0)
then the rate is set to 1.0 as expected. I think the problem is not related only to CoreMedia; maybe I don't understand some fundamental concept behind extensions. Please help if you know how to correct my extension. I'm targeting Swift 3.
OK, it's a really stupid error on my side; I need a break. There is a typo in the extension: it should be newValue instead of rate.
var rate: Float64 {
    set {
        // CMTimebaseSetRate(self, rate)
        // Line below is correct
        CMTimebaseSetRate(self, newValue)
    }
    get {
        return CMTimebaseGetRate(self)
    }
}
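A quick sanity check of the fixed setter (using the host time clock as a stand-in master clock):
let timebase = CMTimebase.instance(withMasterClock: CMClockGetHostTimeClock())
timebase.rate = 1.0
print(timebase.rate) // now prints 1.0 instead of 0.0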

Sequencing sounds with a delay in Swift

Apologies if this has been posted before but I haven't had much luck searching around on this topic. I'm trying to build a morse code converter using Swift. As part of this I've made a function that accepts a string of dots and dashes, and will hopefully play the corresponding audio. I've already successfully loaded up 2 audio players for the short and long beeps.
I started by looping through the string and playing the corresponding sound for each character. However, that just played all the sounds in parallel. Now I'm trying to use dispatch_after, but still running into the same issue. My code is below.
func audioMorseMessage(message: String) {
    var time = dispatch_time(DISPATCH_TIME_NOW, Int64(NSEC_PER_SEC))
    for character in message.characters {
        if String(character) == "-" {
            dispatch_after(time, dispatch_get_main_queue()) {
                self.longBeep.play()
            }
        }
        if String(character) == "." {
            dispatch_after(time, dispatch_get_main_queue()) {
                self.shortBeep.play()
            }
        }
    }
}
Is this the right way to approach this? Is there another way where I can concatenate the audio files during the loop (with small gaps placed between beeps) and then play back the entire file once the loop has completed? Thanks for any help in advance.
This seems like a great opportunity to use NSOperation and NSOperationQueue. I would recommend creating a serial queue and then loading your individual sound operations in sequence. The following code is not fully formed but is pretty close. Hopefully your dot and dash sound files already include the dot-length space after each tone; if they don't, you will have to insert the additional spaces (pauses) yourself.
class LongBeep: NSOperation {
    override func main() {
        if self.cancelled { return }
        print("L", terminator: "") // play long sound
    }
}

class ShortBeep: NSOperation {
    override func main() {
        if self.cancelled { return }
        print("S", terminator: "") // play short sound
    }
}

class Pause: NSOperation {
    override func main() {
        if self.cancelled { return }
        print(" pause ", terminator: "") // play empty sound or use actual delay
    }
}

func audioMorseMessage(message: String) {
    let queue = NSOperationQueue()
    queue.name = "morse-player"
    queue.maxConcurrentOperationCount = 1
    message.characters.forEach { code in
        switch code {
        case "-": queue.addOperation(LongBeep())
        case ".": queue.addOperation(ShortBeep())
        case " ": queue.addOperation(Pause())
        default: break
        }
    }
}
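The print calls above are placeholders. One way to make each operation actually occupy its slot in the serial queue for the duration of its sound (my assumption, not part of the original answer) is to start the player and then sleep the operation's background thread for the sound's duration, keeping the same Swift 2-era style as the answer:
class Beep: NSOperation {
    let player: AVAudioPlayer // assumed preloaded with the dot or dash sound

    init(player: AVAudioPlayer) {
        self.player = player
    }

    override func main() {
        if self.cancelled { return }
        player.currentTime = 0
        player.play()
        // Block this background operation until the sound finishes,
        // so the next operation in the serial queue starts after it
        NSThread.sleepForTimeInterval(player.duration)
    }
}
Pause could do the same with sleepForTimeInterval alone.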

Trying to build something with what I'd call a 'living model' in Swift. Think I might be doing this wrong

I'm trying to build what I'm going to describe as a 'living model'. Imagine I have a virtual creature with an energy attribute which slowly goes down or up over time, depending on its current activity. Once it goes down to a certain level, it naturally goes to sleep; once it goes back up to a certain level, it wakes up naturally. It might also have an exhausted attribute, which is true if energy is under its natural sleep level while the creature is still awake. These attributes changing - the activity, whether the creature is exhausted - all affect the appearance of the creature, and that appearance needs to know to change whenever those things change. Exhausted doesn't just change after a delay, though; it changes when energy reaches a certain point, or when the activity changes.
So you can see there's a few different concepts all working together, and doing this using regular Swift programming is giving me a knot which is currently quite loose, but slowly growing tighter and more complex.
So I need some advice on how to handle this in a way that isn't going to cause headaches and difficult-to-find issues.
You can set properties to represent the various points at which things happen to the creature. Then, as Mr Beardsley suggests, use didSet on your energy property and compare its value to those "action points". Finally, you can use NSTimer to drain the energy property at regular intervals.
This allows you to create several different creature instances with unique energy levels at which they fall asleep or become exhausted.
enum MonsterState {
    case Awake, Asleep, Exhausted
}

class Monster {
    var monsterState = MonsterState.Awake
    let drainRate: Int
    let pointOfExhaustion: Int
    var energy: Int {
        didSet {
            // Check the lower threshold first; otherwise the .Asleep
            // branch could never be reached
            if energy <= 0 {
                monsterState = .Asleep
            } else if energy <= pointOfExhaustion {
                monsterState = .Exhausted
            }
        }
    }

    init(energy: Int, pointOfExhaustion: Int, drainRate: Int) {
        self.energy = energy
        self.pointOfExhaustion = pointOfExhaustion
        self.drainRate = drainRate
    }

    func weaken() {
        NSTimer.scheduledTimerWithTimeInterval(1.0,
            target: self, selector: "drainEnergyWithTimer:",
            userInfo: ["pointsPerSecond": drainRate], repeats: true)
    }

    func drainEnergyWithTimer(timer: NSTimer) {
        if let passedInfo = timer.userInfo as? [NSObject: AnyObject] {
            let energyDecrease = passedInfo["pointsPerSecond"] as! Int
            energy -= energyDecrease
        }
        if energy <= 0 {
            timer.invalidate()
        }
    }
}
let godzilla = Monster(energy: 100, pointOfExhaustion: 12, drainRate: 3)
let mothra = Monster(energy: 150, pointOfExhaustion: 25, drainRate: 2)
godzilla.weaken()
mothra.weaken()
Here is an implementation you can work with to get you started. Note, I used a struct, but you might change that to a class and inherit from SKSpriteNode if you are using Sprite Kit for your game.
enum CreatureState: Int {
    case Active = 50
    case Sleeping = 20
    case Exhausted = 10
}

struct Creature {
    var energy: Int {
        didSet {
            switch self.state {
            case .Active:
                // Check the lowest threshold first so the .Exhausted
                // branch is reachable
                if energy < CreatureState.Sleeping.rawValue {
                    self.state = .Exhausted
                } else if energy < CreatureState.Active.rawValue {
                    self.state = .Sleeping
                }
            case .Sleeping:
                if energy > CreatureState.Sleeping.rawValue {
                    self.state = .Active
                }
            case .Exhausted:
                if energy > CreatureState.Active.rawValue {
                    self.state = .Active
                } else if energy > CreatureState.Sleeping.rawValue {
                    self.state = .Sleeping
                }
            }
        }
    }
    var state: CreatureState

    init(energyLevel: Int, state: CreatureState) {
        self.energy = energyLevel
        self.state = state
    }
}
I modeled the different states of your creature as an enumeration with raw values. You can change those to whatever values mark the change from one state to another.
By using the didSet property observer on energy, it is possible to perform actions any time a new value for energy is set. Overall, we are able to model your requirements with only two properties.
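A quick usage sketch (energy values arbitrary) showing the observer firing as energy crosses the raw-value thresholds:
var creature = Creature(energyLevel: 60, state: .Active)
creature.energy = 30  // below Active (50): state becomes .Sleeping
creature.energy = 60  // above Sleeping (20): state returns to .Active
print(creature.state) // .Active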
