I am setting up an application that, on startup, generates some white noise using AudioKit.
I have set up the following code, which gets called on startup of my application:
let engine = AudioEngine()
let noise = WhiteNoise()
let mixer = Mixer(noise)
mixer.volume = 1
engine.output = mixer
try! engine.start()
But when I start the application, I do not hear any sound being generated. I set up a simple example to generate a sine wave using AVFoundation, and I was able to hear that sound from my simulator.
I found an old thread - AudioKit - no sound output - but I checked the AudioKit repo, and it looks like that feature was removed a couple of months back since it was not being used.
Any help would be appreciated!
Try noise.start(). Generators don't default to being on.
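For completeness, here is a minimal sketch of the corrected startup code (a sketch only; depending on your AudioKit version, WhiteNoise may come from the SoundpipeAudioKit package):

import AudioKit

let engine = AudioEngine()
let noise = WhiteNoise()
let mixer = Mixer(noise)
mixer.volume = 1
engine.output = mixer

do {
    try engine.start()
    noise.start() // generators are silent until started explicitly
} catch {
    print("AudioEngine failed to start: \(error)")
}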
I'm working on an iOS/Flutter application and am trying to work out whether it's possible to play audio from the Music library on iOS with audio modifications (e.g. equalization settings) applied.
It seems like I'm looking for a solution that can work with MPMusicPlayerController, since that appears to be the strategy for playing local audio from the user's iOS Music library. I can find examples of applying EQ to audio on iOS (e.g. using AVAudioUnitEQ and AVAudioEngine: SO link, tutorial), but I'm unable to find any resources to help me understand whether it's possible to bridge the gap between them.
Flutter specific context:
There are Flutter plugins that provide some of the functionality I'm looking for, but don't appear to work together. For example, the just_audio plugin has a robust set of features for modifying audio, but does not work with the local Music application on iOS/MPMusicPlayerController. Other plugins that do work with MPMusicPlayerController, like playify, do not have the ability to modify/transform the audio.
Even though I'm working with Flutter, any general advice on the iOS side would be very helpful. I appreciate any insight someone with more knowledge may be able to share with me!
Updating with my own answer here for future people: it looks like my only path forward (for now) is leaning into AVAudioEngine directly. This is the rough POC that worked for me:
var audioPlayer = AVAudioPlayerNode()
var audioEngine = AVAudioEngine()
var eq = AVAudioUnitEQ()

// Grab a song from the user's Music library.
let mediaItemCollection: [MPMediaItem] = MPMediaQuery.songs().items!
let song = mediaItemCollection[0]

do {
    // assetURL is the bridge from the MPMediaItem world to the AVAudioEngine world.
    let file = try AVAudioFile(forReading: song.assetURL!)

    audioEngine.attach(audioPlayer)
    audioEngine.attach(eq)

    // Chain: player -> EQ -> output
    audioEngine.connect(audioPlayer, to: eq, format: nil)
    audioEngine.connect(eq, to: audioEngine.outputNode, format: file.processingFormat)

    audioPlayer.scheduleFile(file, at: nil)
    try audioEngine.start()
    audioPlayer.play()
} catch {
    print("Playback setup failed: \(error)")
}
The trickiest part for me was working out how to bridge the "Music library/MPMediaItem" world to the "AVAudioEngine" world -- which turned out to be just AVAudioFile(forReading: song.assetURL!)
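One follow-up note: the eq node above is created with default settings, so to hear an actual EQ change you would configure its bands before starting the engine. A minimal sketch, creating the EQ with two bands (the band count, frequencies, and gains are arbitrary example values):

let eq = AVAudioUnitEQ(numberOfBands: 2)

// Low-shelf boost in the bass region.
eq.bands[0].filterType = .lowShelf
eq.bands[0].frequency = 120 // Hz
eq.bands[0].gain = 6        // dB
eq.bands[0].bypass = false

// Parametric cut in the upper mids.
eq.bands[1].filterType = .parametric
eq.bands[1].frequency = 3500 // Hz
eq.bands[1].bandwidth = 1.0  // octaves
eq.bands[1].gain = -4        // dB
eq.bands[1].bypass = false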
Trying to use AudioKit 5 to dynamically create a player with a mixer, and attach it to a main mixer. I'd like the resulting chain to look like:
AudioPlayer -> Mixer(for player) -> Mixer(for output) -> AudioEngine.output
My example repo is here: https://github.com/hoopes/AK5Test1
You can see in the main file here (https://github.com/hoopes/AK5Test1/blob/main/AK5Test1/AK5Test1App.swift) that there are three functions.
The first works, where an mp3 is played on a Mixer that is created when the controller class is created.
The second works, where a newly created AudioPlayer is hooked directly to the outputMixer.
However, the third, where I try to set up the chain above, does not, and crashes with the "player started when in a disconnected state" error. I've copied the function here:
/** Try to add a mixer with a player to the main mixer */
func doesNotWork() {
    let p2 = AudioPlayer()
    let localMixer = Mixer()
    localMixer.addInput(p2)
    outputMixer.addInput(localMixer)
    playMp3(p: p2)
}
Where playMp3 just plays an example mp3 on the AudioPlayer.
I'm not sure how I'm misusing the Mixer. In my actual application, I have a longer chain of mixers/boosters/etc, and getting the same error, which led me to create the simple test app.
If anyone can offer advice, I'd love to hear it. Thanks so much!
In your case, you can swap the order of outputMixer.addInput(localMixer) and localMixer.addInput(p2), and then it works.
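For reference, a sketch of the reordered function from the question (the function name is just illustrative):

/** Add a mixer with a player to the main mixer -- connect toward the output first */
func nowWorks() {
    let p2 = AudioPlayer()
    let localMixer = Mixer()
    outputMixer.addInput(localMixer) // connect the mixer to the output chain first
    localMixer.addInput(p2)          // then attach the player to the connected mixer
    playMp3(p: p2)
}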
Once you have started the engine, work backwards from the output when making your audio chain connections. Your problem was that you attached a player to a mixer that was disconnected from the output. You needed to first connect the mixer to the output, and then connect the player to that mixer.
The advice I wound up getting from an AudioKit contributor was to do everything possible to create all audio objects that you need up front, and dynamically change their volume to "connect" and "disconnect", so to speak.
Imagine you have a piano app (a contrived example, but hopefully gets the point across) - instead of creating a player when a key is pressed, connecting it, and disconnecting/disposing when the note is complete, create a player for every key on startup, and deal with them dynamically - this prevents any weirdness with "disconnected state", etc.
This has been working pretty flawlessly for me since.
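As a rough sketch of that pattern (all names here are hypothetical, using the AudioKit 5 types from the question):

let keyCount = 88
var keyPlayers: [AudioPlayer] = []
var keyMixers: [Mixer] = []

func setUpKeys() {
    for _ in 0..<keyCount {
        let player = AudioPlayer()
        let mixer = Mixer(player)
        mixer.volume = 0            // silent, i.e. "disconnected"
        outputMixer.addInput(mixer) // wired into the graph once, up front
        keyPlayers.append(player)
        keyMixers.append(mixer)
    }
}

func noteOn(key: Int) {
    keyMixers[key].volume = 1 // "connect"
    keyPlayers[key].play()
}

func noteOff(key: Int) {
    keyPlayers[key].stop()
    keyMixers[key].volume = 0 // "disconnect"
}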
I'm trying to perform frequency modulation on a signal coming from AKPlayer, which in turn plays an mp3 file.
I've tried to work with AKOperationEffect, but it doesn't work as expected:
let modulatedPlayer = AKOperationEffect(player) { player, _ in
    let oscillator = AKOperation.fmOscillator(baseFrequency: modulationFrequency,
                                              carrierMultiplier: player.toMono(),
                                              modulatingMultiplier: 100,
                                              modulationIndex: 0,
                                              amplitude: 1)
    return oscillator
}
Has anybody an idea how to get the mp3 modulated?
Unfortunately, the AudioKit API is not so well documented... there are tons of examples, but they all deal with synthetic sounds such as sine waves, square waves, etc.
I took the time to create a working practical example to help you, @Ulrich. You can drop it in and play if you have the playground environment available, or just use it as a reference to amend your code (trusting me that it works). It's self-explanatory, but you can read more about why my version works after the code. TL;DR:
Before: [audio sample]
After: [audio sample]
The following was tested and ran without problems in the latest Xcode and Swift at the time of writing (Xcode 11.4, Swift 5.2, and AudioKit 4.9.5):
import AudioKitPlaygrounds
import AudioKit

let audiofile = try! AKAudioFile(readFileName: "demo.mp3")
let player = AKPlayer(audioFile: audiofile)

let generator = AKOperationEffect(player) { player, _ in
    let oscillator = AKOperation.fmOscillator(baseFrequency: 400,
                                              carrierMultiplier: player.toMono(),
                                              modulatingMultiplier: 100,
                                              modulationIndex: 0,
                                              amplitude: 1)
    return oscillator
}

AudioKit.output = generator
try! AudioKit.start()
player.play()
generator.start()
Find the playground ready to use on the Downloads page (https://audiokit.io/downloads/).
As you can see, apart from declaring a path to the mp3 file when initializing a new AKAudioFile and passing it to an AKPlayer instance, there are four steps that need to occur in a certain order:
1) Assign an `AKNode` to the AudioKit output
2) Start the AudioKit engine
3) Start the `player` to generate output
4) Start the generator to modulate your sound
The best way to understand why is to forget about code for a bit and imagine patching things in the real world; then try to imagine the audio flow.
Hope this helps you and future readers!
I'm trying to play an instrument made of several .wav samples using AudioKit.
I've tried so far:
Using AKSampler (with the underlying AVAudioUnitSampler) – it worked fine, but I can't figure out how to control the ADSR envelope here – calling stop will stop the note immediately.
Another way is to use an AKSamplePlayer for each sample and play it, manually setting the rate so it plays the right note. I could (possibly?) then connect an AKAmplitudeEnvelope to each sample player. But if I want to play 5 notes of the same sample simultaneously, I would need 5 instances of AKSamplePlayer, which seems like a waste of resources.
I also tried to find a way to just push raw audio samples to the AudioKit output buffer, doing the mixing and sample interpolation myself (in C, probably?). But I didn't find how to do it :(
What is the right way to make a multi-sampled instrument using AudioKit? I feel like it must be a fairly simple task.
Thanks to mahal tertin, it's pretty easy to use AKAUPresetBuilder!
You can create .aupreset file somewhere in tmp directory and then load this instrument with AKSampler.
The only thing worth noting is that by default AKAUPresetBuilder will generate samples with the trigger mode set to trigger, which ignores note-off events. So you should set it explicitly.
For example:
let sampleC4 = AKAUPresetBuilder.generateDictionary(
    rootNote: 60,
    filename: pathToC4WavSample,
    startNote: 48,
    endNote: 65)
sampleC4["triggerMode"] = "hold"

let sampleC5 = AKAUPresetBuilder.generateDictionary(
    rootNote: 72,
    filename: pathToC5WavSample,
    startNote: 66,
    endNote: 83)
sampleC5["triggerMode"] = "hold"

AKAUPresetBuilder.createAUPreset(
    dict: [sampleC4, sampleC5],
    path: pathToAUPresetFilename,
    instrumentName: "My Instrument",
    attack: 0,
    release: 0.2)
and then create a sampler and start AudioKit:
sampler = AKSampler()
try sampler.loadInstrument(atPath: pathToAUPresetFilename)
AudioKit.output = sampler
AudioKit.start()
and then use this to start playing a note:
sampler.play(noteNumber: MIDINoteNumber(63), velocity: MIDIVelocity(120), channel: 0)
and this to stop it, respecting the release parameter:
sampler.stop(noteNumber: MIDINoteNumber(63), channel: 0)
Probably the best way would be to embed your wav files into an EXS or SoundFont format, making use of tools in that realm to accomplish the ADSR, for instance. Otherwise you'll kind of have to have an instrument for each sample.
I am in the process of developing a game for iOS 9+ using Sprite Kit and preferably using Swift libraries.
Currently I'm using a Singleton where I preload my audio files, each connected to a separate instance of AVAudioPlayer.
Here's a short code snippet to get the idea:
import SpriteKit
import AudioToolbox
import AVFoundation

class AudioEngine {
    static let sharedInstance = AudioEngine()

    internal var sfxPing: AVAudioPlayer

    private init() {
        self.sfxPing = AVAudioPlayer()
        if let path = NSBundle.mainBundle().pathForResource("ping", ofType: "m4a") {
            do {
                let url = NSURL(fileURLWithPath: path)
                sfxPing = try AVAudioPlayer(contentsOfURL: url)
                sfxPing.prepareToPlay()
            } catch {
                print("ERROR: Can't load ping.m4a audio file.")
            }
        }
    }
}
This Singleton is initialised during app start-up. In the game-loop I then just call the following line to play a specific audio file:
AudioEngine.sharedInstance.sfxPing.play()
This basically works, but I always get glitches when a file is played, and the frame rate drops from 60.0 to 56.0 on my iPad Air.
Does anyone have an idea how to fix this performance issue with AVAudioPlayer?
I also watched out for 3rd party libraries, namely:
AudioKit [Looks very heavyweight]
ObjectAL [Last update 2013...]
AVAudioEngine [Based on AVAudioPlayer, same problems?]
Requirements:
Play a lot of very short samples (like shots, hits, etc.)
Play some motor effects (so pitching would be nice)
Play some background / ambient sound in a loop
NO nasty glitches / frame rate drops!
Could you recommend any of the above mentioned libraries for my requirements or point out the problems using the above code ?
UPDATE:
Playing short sounds with:
self.runAction(SKAction.playSoundFileNamed("sfx.caf", waitForCompletion: false))
does indeed improve the frame rate. I exported the audio files with Audacity to the .caf format (Apple's Core Audio Format). But in the tutorial, they export with "Signed 32-bit PCM" encoding, which led to distorted audio playback in my case. Any of the other encoding options (32-bit float, U-Law, A-Law, etc.) worked fine for me.
Why use the caf format? Because it's uncompressed, and thus loads faster into memory with less CPU overhead compared to compressed formats like m4a. For short sound effects played often in short intervals this makes sense, and disk usage is not affected much since short audio files consume only a few kilobytes. For bigger audio files, like ambient and background music, compressed formats (mp3, m4a) are obviously the better choice.
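On top of switching formats, it may also help to create the SKAction once (e.g. at scene load) and reuse it, rather than constructing it on every play, so the sound isn't set up on first use. A sketch (class and file names are just examples):

class GameScene: SKScene {
    // Created once; reused on every play to avoid per-call setup cost.
    let sfxShot = SKAction.playSoundFileNamed("shot.caf", waitForCompletion: false)

    func fire() {
        runAction(sfxShot)
    }
}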
Since you are developing a game for iOS 9+, you can use the new iOS 9 class SKAudioNode (official Apple doc):
var backgroundMusic: SKAudioNode!
For example, you can add this to didMoveToView():
if let musicURL = NSBundle.mainBundle().URLForResource("music", withExtension: "m4a") {
    backgroundMusic = SKAudioNode(URL: musicURL)
    addChild(backgroundMusic)
}
You can also use it to play a simple effect:
let beep = SKAudioNode(fileNamed: "beep.wav")
beep.autoplayLooped = false
self.addChild(beep)
Finally, if you want to change the volume:
beep.runAction(SKAction.changeVolumeTo(0.4, duration: 0))
Update:
I see you have updated your question to mention AVAudioPlayer and SKAction. I've tested both of them in my iOS 8+ compatible games.
Regarding AVAudioPlayer, I personally use a custom library I made, based on the old SKTAudio.
I've seen your AVAudioPlayer init code, and mine is different because I use:
#available(iOS 7.0, *)
public init(contentsOfURL url: NSURL, fileTypeHint utiString: String?)
I don't know if fileTypeHint makes the difference, so give it a try and let me know about your test.
Advantages of your code:
With a shared-instance audio manager based on AVAudioPlayer, you can control the volume, use your manager wherever you want, and ensure compatibility with iOS 8.
Disadvantages of your code:
Every time you play a sound and want to play another, the previous one is cut off, especially if you have launched background music.
How to solve this? According to this SO post, it seems AVFoundation works well with no more than 4 instantiated AVAudioPlayer properties, so you can do this:
1) backgroundMusicPlayer: AVAudioPlayer!
2) soundEffectPlayer1: AVAudioPlayer!
3) soundEffectPlayer2: AVAudioPlayer!
4) soundEffectPlayer3: AVAudioPlayer!
You could build a method that switches through the 3 sound effect players to see which one is occupied:
if player.playing
and uses the next free player. With this workaround, your sounds always play correctly, even your background music.
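A rough sketch of what that method could look like (the shape is hypothetical, written in Swift 2 style to match the snippets above):

func playEffect(url: NSURL) {
    guard let newPlayer = try? AVAudioPlayer(contentsOfURL: url) else {
        print("ERROR: Can't load \(url)")
        return
    }
    // Fill the first free slot; reuse the last slot if all three are busy.
    if soundEffectPlayer1 == nil || !soundEffectPlayer1.playing {
        soundEffectPlayer1 = newPlayer
    } else if soundEffectPlayer2 == nil || !soundEffectPlayer2.playing {
        soundEffectPlayer2 = newPlayer
    } else {
        soundEffectPlayer3 = newPlayer
    }
    newPlayer.prepareToPlay()
    newPlayer.play()
}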