Apologies if this has been posted before but I haven't had much luck searching around on this topic. I'm trying to build a morse code converter using Swift. As part of this I've made a function that accepts a string of dots and dashes, and will hopefully play the corresponding audio. I've already successfully loaded up 2 audio players for the short and long beeps.
I started by looping through the string and playing the corresponding sound for each character. However, that just played all the sounds in parallel. Now I'm trying to use dispatch_after, but still running into the same issue. My code is below.
func audioMorseMessage(message: String) {
    var time = dispatch_time(DISPATCH_TIME_NOW, Int64(NSEC_PER_SEC))
    for character in message.characters {
        if String(character) == "-" {
            dispatch_after(time, dispatch_get_main_queue()) {
                self.longBeep.play()
            }
        }
        if String(character) == "." {
            dispatch_after(time, dispatch_get_main_queue()) {
                self.shortBeep.play()
            }
        }
    }
}
Is this the right way to approach this? Is there another way where I can concatenate the audio files during the loop (with small gaps placed between beeps) and then play back the entire file once the loop has completed? Thanks for any help in advance.
This seems like a great opportunity to use NSOperation and NSOperationQueue. I would recommend creating a serial queue and then loading your individual sound operations in sequence. The following code is not fully formed but is pretty close. Hopefully your dot and dash sound files already include the dot-length space after each tone; if they don't, you will have to insert the additional spaces (pauses) yourself.
class LongBeep: NSOperation {
    override func main() {
        if self.cancelled { return }
        print("L", terminator: "") // play long sound
    }
}

class ShortBeep: NSOperation {
    override func main() {
        if self.cancelled { return }
        print("S", terminator: "") // play short sound
    }
}

class Pause: NSOperation {
    override func main() {
        if self.cancelled { return }
        print(" pause ", terminator: "") // play empty sound or use actual delay
    }
}
func audioMorseMessage(message: String) {
    let queue = NSOperationQueue()
    queue.name = "morse-player"
    queue.maxConcurrentOperationCount = 1
    message.characters.forEach { code in
        switch code {
        case "-": queue.addOperation(LongBeep())
        case ".": queue.addOperation(ShortBeep())
        case " ": queue.addOperation(Pause())
        default: break
        }
    }
}
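One subtlety worth noting: for the serial queue to actually space the tones out, each operation's main() must not return until its sound has finished playing. A minimal sketch of one way to do that, assuming the question's preloaded AVAudioPlayers (the BeepOperation class and the sleep-for-duration trick are illustrative, not the only option):

import AVFoundation

class BeepOperation: NSOperation {
    let player: AVAudioPlayer

    init(player: AVAudioPlayer) {
        self.player = player
        super.init()
    }

    override func main() {
        if self.cancelled { return }
        player.play()
        // Block this background operation until the tone has played through,
        // so the serial queue doesn't start the next sound early.
        NSThread.sleepForTimeInterval(player.duration)
    }
}

You would then enqueue queue.addOperation(BeepOperation(player: shortBeep)) and so on, in place of the placeholder classes above.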
I am using a for loop coupled with DispatchQueue.main.asyncAfter to incrementally lower playback volume over the course of a 5- or 10-minute duration.
Here is how I am currently implementing it:
for i in (0...(numberOfSecondsToFadeOut*timesChangePerSecond)) {
    DispatchQueue.main.asyncAfter(deadline: .now() + Double(i)/Double(timesChangePerSecond)) {
        if self.activityHasEnded {
            NSLog("Activity has ended") //This will keep on printing
        } else {
            let volumeSetTo = originalVolume - (reductionAmount)*Float(i)
            self.setVolume(volumeSetTo)
        }
    }
    if self.activityHasEnded {
        break
    }
}
My goal is to have activityHasEnded act as the breaker. The issue, as noted in the comment, is that despite using break, the NSLog keeps printing every period. What would be a better way to fully break out of this for loop that uses DispatchQueue.main.asyncAfter?
Updated: As noted by Rob, it makes more sense to use a Timer. Here is what I did:
self.fadeOutTimer = Timer.scheduledTimer(withTimeInterval: timerFrequency, repeats: true) { (timer) in
    let currentVolume = self.getCurrentVolume()
    if currentVolume > destinationVolume {
        let volumeSetTo = currentVolume - reductionAmount
        self.setVolume(volumeSetTo)
        print("Lowered volume to \(volumeSetTo)")
    }
}
When the timer is no longer needed, I call self.fadeOutTimer?.invalidate()
You don’t want to use asyncAfter: while you could use the DispatchWorkItem rendition (which is cancelable), you will end up with a mess trying to keep track of all of the individual work items. Worse, a series of individually dispatched items is subject to “timer coalescing”, where later tasks start to clump together, no longer firing at the desired interval.
The simple solution is to use a repeating Timer, which avoids coalescing and is easily invalidated when you want to stop it.
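For illustration, a minimal sketch of such a repeating timer that also stops itself, using the question's timerFrequency, reductionAmount, destinationVolume, and activityHasEnded (the self-invalidation logic is an assumption about the desired behavior, not part of the original answer):

self.fadeOutTimer = Timer.scheduledTimer(withTimeInterval: timerFrequency, repeats: true) { [weak self] timer in
    guard let self = self, !self.activityHasEnded else {
        timer.invalidate() // stop as soon as the activity ends (or self goes away)
        return
    }
    let currentVolume = self.getCurrentVolume()
    guard currentVolume > destinationVolume else {
        timer.invalidate() // fade complete; no need to keep firing
        return
    }
    self.setVolume(currentVolume - reductionAmount)
}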
You can utilise DispatchWorkItem, which can be dispatched to a DispatchQueue asynchronously and can also be cancelled even after it has been dispatched.
var pendingVolumeChanges = [DispatchWorkItem]() // best kept as an instance property so it is reachable when the activity ends

for i in (0...(numberOfSecondsToFadeOut*timesChangePerSecond)) {
    let work = DispatchWorkItem {
        // Guard inside the item as well, in case it fires before being cancelled
        guard !self.activityHasEnded else { return }
        let volumeSetTo = originalVolume - reductionAmount*Float(i)
        self.setVolume(volumeSetTo)
    }
    pendingVolumeChanges.append(work) // retain a reference so it can be cancelled later
    DispatchQueue.main.asyncAfter(deadline: .now() + Double(i)/Double(timesChangePerSecond), execute: work)
}
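Note that checking activityHasEnded inside the scheduling loop (as in the question) can never work: the entire loop runs synchronously, long before any of the delayed items fire. Instead, cancel the retained work items at the moment the activity actually ends. A minimal sketch, assuming a hypothetical endActivity() hook on the same class:

func endActivity() {
    activityHasEnded = true
    pendingVolumeChanges.forEach { $0.cancel() } // drop every volume change that hasn't fired yet
    pendingVolumeChanges.removeAll()
}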
We're working on a SpriteKit game. In order to have more control over sound effects, we switched from using SKAudioNodes to having some AVAudioPlayers. While everything seems to be working well in terms of game play, frame rate, and sounds, we're seeing occasional error(?) messages in the console output when testing on physical devices:
... [general] __CFRunLoopModeFindSourceForMachPort returned NULL for mode 'kCFRunLoopDefaultMode' livePort: #####
It doesn't seem to really cause any harm when it happens (no sound glitches or hiccups in frame rate or anything), but not understanding exactly what the message means and why it's happening is making us nervous.
Details:
The game is all standard SpriteKit, all events driven by SKActions, nothing unusual there.
Our uses of AVFoundation are as follows. Initialization of app sounds:
class Sounds {
    let soundQueue: DispatchQueue

    init() {
        do {
            try AVAudioSession.sharedInstance().setActive(true)
        } catch {
            print(error.localizedDescription)
        }
        soundQueue = DispatchQueue.global(qos: .background)
    }

    func execute(_ soundActions: @escaping () -> Void) {
        soundQueue.async(execute: soundActions)
    }
}
Creating various sound effect players:
guard let player = try? AVAudioPlayer(contentsOf: url) else {
    fatalError("Unable to instantiate AVAudioPlayer")
}
player.prepareToPlay()
Playing a sound effect:
let pan = stereoBalance(...)
sounds.execute {
    if player.pan != pan {
        player.pan = pan
    }
    player.play()
}
The AVAudioPlayers are all for short sound effects with no looping, and they get reused. We create about 25 players total, including multiple players for certain effects when they can repeat in quick succession. For a particular effect, we rotate through the players for that effect in a fixed sequence. We have verified that whenever a player is triggered, its isPlaying is false, so we're not trying to invoke play on something that's already playing.
The message doesn't appear very often. Over the course of a 5-10 minute game with possibly thousands of sound effects, we see the message maybe 5-10 times.
The message seems to occur most commonly when a bunch of sound effects are being played in quick succession, but it doesn't feel like it's 100% correlated with that.
Not using the dispatch queue (i.e., having sounds.execute just call soundActions() directly) doesn't fix the issue (though that does cause the game to lag significantly). Changing the dispatch queue to some of the other priorities like .utility also doesn't affect the issue.
Making sounds.execute just return immediately (i.e., don't actually call the closure at all, so there's no play()) does eliminate the messages.
We did find the source code that's producing the message at this link:
https://github.com/apple/swift-corelibs-foundation/blob/master/CoreFoundation/RunLoop.subproj/CFRunLoop.c
but we don't understand it except at an abstract level, and are not sure how run loops are involved in the AVFoundation stuff.
Lots of googling has turned up nothing helpful. And as I indicated, it doesn't seem to be causing noticeable problems at all. It would be nice to know why it's happening though, and either how to fix it or to have certainty that it won't ever be an issue.
We're still working on this, but have experimented enough that it's clear how we should do things. Outline:
Use the scene's audioEngine property.
For each sound effect, make an AVAudioFile for reading the audio's URL from the bundle. Read it into an AVAudioPCMBuffer. Stick the buffers into a dictionary that's indexed by sound effect.
Make a bunch of AVAudioPlayerNodes, attach() them to the audioEngine, and connect(playerNode, to: audioEngine.mainMixerNode). At the moment we're creating these dynamically, searching through our current list of player nodes to find one that's not playing and making a new one if none is available. That probably has more overhead than is needed, since we have to have callbacks to observe when a player node finishes whatever it's playing and set it back to a stopped state. We'll try switching to a fixed maximum number of active sound effects and rotating through the players in order.
To play a sound effect, grab the buffer for the effect, find a non-busy playerNode, and do playerNode.scheduleBuffer(buffer, ...). And playerNode.play() if it's not currently playing.
I may update this with some more detailed code once we have things fully converted and cleaned up. We still have a couple of long-running AVAudioPlayers that we haven't switched to AVAudioPlayerNodes going through the mixer. But pumping the vast majority of sound effects through the scheme above has eliminated the error message, and it needs far less memory since the sound effects are no longer duplicated the way they were before. There's a tiny bit of lag, but we haven't tried moving any of this to a background thread yet; not having to search for and constantly start/stop players might even eliminate it without that.
Since switching to this approach, we've had no more runloop complaints.
Edit: Some example code...
import SpriteKit
import AVFoundation

enum SoundEffect: String, CaseIterable {
    case playerExplosion = "player_explosion"
    // lots more

    var url: URL {
        guard let url = Bundle.main.url(forResource: self.rawValue, withExtension: "wav") else {
            fatalError("Sound effect file \(self.rawValue) missing")
        }
        return url
    }

    func audioBuffer() -> AVAudioPCMBuffer {
        guard let file = try? AVAudioFile(forReading: self.url) else {
            fatalError("Unable to instantiate AVAudioFile")
        }
        guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat, frameCapacity: AVAudioFrameCount(file.length)) else {
            fatalError("Unable to instantiate AVAudioPCMBuffer")
        }
        do {
            try file.read(into: buffer)
        } catch {
            fatalError("Unable to read audio file into buffer, \(error.localizedDescription)")
        }
        return buffer
    }
}
class Sounds {
    var audioBuffers = [SoundEffect: AVAudioPCMBuffer]()
    // more stuff

    init() {
        for effect in SoundEffect.allCases {
            preload(effect)
        }
    }

    func preload(_ sound: SoundEffect) {
        audioBuffers[sound] = sound.audioBuffer()
    }

    func cachedAudioBuffer(_ sound: SoundEffect) -> AVAudioPCMBuffer {
        guard let buffer = audioBuffers[sound] else {
            fatalError("Audio buffer for \(sound.rawValue) was not preloaded")
        }
        return buffer
    }
}
class Globals {
    // Sounds loaded once and shared among all scenes in the game
    static let sounds = Sounds()
}
class SceneAudio {
    let stereoEffectsFrame: CGRect
    let audioEngine: AVAudioEngine
    var playerNodes = [AVAudioPlayerNode]()
    var nextPlayerNode = 0
    // more stuff

    init(stereoEffectsFrame: CGRect, audioEngine: AVAudioEngine) {
        self.stereoEffectsFrame = stereoEffectsFrame
        self.audioEngine = audioEngine
        do {
            try audioEngine.start()
            let buffer = Globals.sounds.cachedAudioBuffer(.playerExplosion)
            // We got up to about 10 simultaneous sounds when really pushing the game
            for _ in 0 ..< 10 {
                let playerNode = AVAudioPlayerNode()
                playerNodes.append(playerNode)
                audioEngine.attach(playerNode)
                audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: buffer.format)
                playerNode.play()
            }
        } catch {
            logging("Cannot start audio engine, \(error.localizedDescription)")
        }
    }

    func soundEffect(_ sound: SoundEffect, at position: CGPoint = .zero) {
        guard audioEngine.isRunning else { return }
        let buffer = Globals.sounds.cachedAudioBuffer(sound)
        let playerNode = playerNodes[nextPlayerNode]
        nextPlayerNode = (nextPlayerNode + 1) % playerNodes.count
        playerNode.pan = stereoBalance(position)
        playerNode.scheduleBuffer(buffer)
    }

    func stereoBalance(_ position: CGPoint) -> Float {
        guard stereoEffectsFrame.width != 0 else { return 0 }
        guard position.x <= stereoEffectsFrame.maxX else { return 1 }
        guard position.x >= stereoEffectsFrame.minX else { return -1 }
        return Float((position.x - stereoEffectsFrame.midX) / (0.5 * stereoEffectsFrame.width))
    }
}
class GameScene: SKScene {
    var audio: SceneAudio!
    // lots more stuff

    // somewhere in initialization
    // gameFrame is the area where action takes place and which
    // determines panning for stereo sound effects
    audio = SceneAudio(stereoEffectsFrame: gameFrame, audioEngine: audioEngine)

    func destroyPlayer(_ player: SKSpriteNode) {
        audio.soundEffect(.playerExplosion, at: player.position)
        // more stuff
    }
}
This is an AudioKit question:
I am really new to AudioKit and audio in general.
My question is: how could I use AudioKit to create a sound that changes as I move my phone around? I already know how to get the gyro information, so let's say I can take the gyro values between 0-10, zero being no movement and 10 being a lot of movement of the phone. I want to translate that into sounds corresponding to how hard/quickly the phone is being moved. To start, just move the sound higher in pitch as the speed increases, with a low pitch down at zero. Sounds easy, yes?
I'm just not experienced enough to know which AudioKit class to use or how to use it to achieve my results.
Thank you!
Michael
You have to write your own AKOperationGenerator.
enum PitchEnvVCOSynthParameter: Int {
    case frequency, gate
}

struct PitchEnvVCO {
    static var frequency: AKOperation {
        return AKOperation.parameters[PitchEnvVCOSynthParameter.frequency.rawValue]
    }
    static var gate: AKOperation {
        return AKOperation.parameters[PitchEnvVCOSynthParameter.gate.rawValue]
    }
}
extension AKOperationGenerator {
    var frequency: Double {
        get { return self.parameters[PitchEnvVCOSynthParameter.frequency.rawValue] }
        set(newValue) { self.parameters[PitchEnvVCOSynthParameter.frequency.rawValue] = newValue }
    }
    var gate: Double {
        get { return self.parameters[PitchEnvVCOSynthParameter.gate.rawValue] }
        set(newValue) { self.parameters[PitchEnvVCOSynthParameter.gate.rawValue] = newValue }
    }
}
and
let generator = AKOperationGenerator { parameters in
    let oscillator = AKOperation.squareWave(
        frequency: PitchEnvVCO.frequency
    )
    return oscillator
}
and then make your variable control the frequency:
var vco1Freq: Double = 440.0 {
    didSet {
        generator.parameters[PitchEnvVCOSynthParameter.frequency.rawValue] = vco1Freq
    }
}
Fetch the gyro data and make it control your variable as described here.
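For illustration, here is a minimal CoreMotion sketch of that last step, assuming the generator above has been started; the update rate and the mapping of the 0-10 speed range onto 220-1760 Hz are arbitrary choices, not part of the original answer:

import CoreMotion

let motionManager = CMMotionManager()
motionManager.gyroUpdateInterval = 1.0 / 30.0
motionManager.startGyroUpdates(to: .main) { data, _ in
    guard let rate = data?.rotationRate else { return }
    // Overall rotation speed in radians/second, clamped to the 0...10 range
    let speed = min(sqrt(rate.x * rate.x + rate.y * rate.y + rate.z * rate.z), 10.0)
    // Map 0...10 onto three octaves above 220 Hz; the didSet above pushes it to the generator
    vco1Freq = 220.0 * pow(2.0, 3.0 * speed / 10.0)
}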
I'm dipping my toes into RxSwift and would like to create a "streaming API" for one of my regular API calls.
My idea is to take the regular call (which already uses observables without any problems) and have a timer fire such calls, sending the results on the same observable so the view controller can update automatically. So instead of doing this (pseudocode follows):
func getLocations() -> Observable<[Location]> {
    return Observable<[Location]>.create { sink in
        NSURLSession.sharedSession.rx_JSON(API.locationsRequest).map { json in
            return json.flatMap { Location($0) }
        }
    }
}
I'd like for this to happen (pseudocode follows):
func getLocations(interval: NSTimeInterval) -> Observable<[Location]> {
    return Observable<[Location]>.create { sink in
        NSTimer(interval) {
            NSURLSession.sharedSession.rx_JSON(API.locationsRequest).map { json in
                sink.onNext(json.flatMap { Location($0) })
            }
        }
    }
}
The last thing I tried was adding an NSTimer to the mix, but I can't figure out how to take the reference to the sink and pass it to the method called by the timer to actually send the events down the pipe, given that the timer's handler must be a standalone method. I tried the block-based timer extensions from BlocksKit, but the timer fired every second instead of at the specified interval, which defeated the purpose.
I've also read about the Interval operator but I'm not sure it's the right way to go.
Any pointers on how to get this right?
The end goal would be to have the timer re-fire only after the previous call has finished (either success or fail).
You should do something like the code below:
func getLocations(interval: NSTimeInterval) -> Observable<[Location]> {
    return Observable<[Location]>.create { observer in
        let getLocationDisposable = Observable<Int64>.interval(interval, scheduler: MainScheduler.instance)
            .subscribe { (e: Event<Int64>) in
                // The request observable must itself be subscribed, or it never fires
                NSURLSession.sharedSession.rx_JSON(API.locationsRequest)
                    .map { json in json.flatMap { Location($0) } }
                    .subscribeNext { locations in
                        observer.onNext(locations)
                    }
            }
        return AnonymousDisposable {
            getLocationDisposable.dispose()
        }
    }
}
The code above fires the API.locationsRequest every interval seconds and sends the result on the same observable. Please note that you have to dispose of the interval subscription when the main observable is disposed.
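As for the stated end goal (re-firing only after the previous call has finished), one option worth sketching, as an assumption rather than part of the answer above, is flatMapFirst, which simply ignores interval ticks while a previous inner observable is still in flight:

func getLocations(interval: NSTimeInterval) -> Observable<[Location]> {
    return Observable<Int64>.interval(interval, scheduler: MainScheduler.instance)
        .flatMapFirst { _ in
            // Ticks that arrive while a request is still running are dropped
            NSURLSession.sharedSession.rx_JSON(API.locationsRequest)
                .map { json in json.flatMap { Location($0) } }
        }
}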
Unfortunately, some RAC pieces do not offer SignalProducers, only Signals; Action, for example, has a values field which is a Signal. But for my logic I need a SignalProducer.
How can I convert a Signal to a SignalProducer?
toSignalProducer(toRACSignal(x)) does not seem like a good solution.
For now I've settled on this extension:
extension Signal {
    func toSignalProducer() -> SignalProducer<T, E> {
        return SignalProducer { sink, compositeDisposable in
            compositeDisposable.addDisposable(self.observe(sink))
        }
    }
}
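A hypothetical usage example (the action name is illustrative), assuming a ReactiveCocoa Action whose values signal you want as a producer:

let producer = action.values.toSignalProducer()
producer.startWithNext { value in
    // react to values sent on the underlying Signal from this point on
}

Note that each start of this producer simply begins observing the live Signal from that moment; unlike a buffered producer, it does not replay values that were sent earlier.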