I am trying to play a short sequence of musical notes, using a default sine wave as the sound, inside a Swift Playground. At a later point I'd like to replace the sound with a SoundFont, but for now I'd be happy just producing some sound.
I want this to be a MIDI-like sequence with direct control over the notes, not something purely audio based. The AudioToolbox framework seems to provide what I'm looking for, but I have trouble fully understanding its usage. Here's what I am currently trying:
import AudioToolbox
// Creating the sequence
var sequence:MusicSequence = nil
var musicSequence = NewMusicSequence(&sequence)
// Creating a track
var track:MusicTrack = nil
var musicTrack = MusicSequenceNewTrack(sequence, &track)
// Adding notes
var time = MusicTimeStamp(1.0)
for index: UInt8 in 60...72 {
    var note = MIDINoteMessage(channel: 0,
                               note: index,
                               velocity: 64,
                               releaseVelocity: 0,
                               duration: 1.0)
    musicTrack = MusicTrackNewMIDINoteEvent(track, time, &note)
    time += 1
}
// Creating a player
var musicPlayer:MusicPlayer = nil
var player = NewMusicPlayer(&musicPlayer)
player = MusicPlayerSetSequence(musicPlayer, sequence)
player = MusicPlayerStart(musicPlayer)
As you can imagine, no sound is playing. I'd appreciate any ideas on how to get that note sequence to play aloud.
You have to enable the asynchronous mode for the Playground.
Add this at the top (Xcode 7, Swift 2):
import XCPlayground
XCPlaygroundPage.currentPage.needsIndefiniteExecution = true
and your sequence will play.
The same for Xcode 8 (Swift 3):
import PlaygroundSupport
PlaygroundPage.current.needsIndefiniteExecution = true
Working MIDI example in a Swift Playground
import PlaygroundSupport
import AudioToolbox
var sequence : MusicSequence? = nil
var musicSequence = NewMusicSequence(&sequence)
var track : MusicTrack? = nil
var musicTrack = MusicSequenceNewTrack(sequence!, &track)
// Adding notes
var time = MusicTimeStamp(1.0)
for index: UInt8 in 60...72 { // C4 to C5
    var note = MIDINoteMessage(channel: 0,
                               note: index,
                               velocity: 64,
                               releaseVelocity: 0,
                               duration: 1.0)
    musicTrack = MusicTrackNewMIDINoteEvent(track!, time, &note)
    time += 1
}
// Creating a player
var musicPlayer : MusicPlayer? = nil
var player = NewMusicPlayer(&musicPlayer)
player = MusicPlayerSetSequence(musicPlayer!, sequence)
player = MusicPlayerStart(musicPlayer!)
PlaygroundPage.current.needsIndefiniteExecution = true
Great MIDI reference page with a nice chart
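The question also mentions replacing the default sine sound with a SoundFont later on. That part isn't covered by the answer above; one possible approach (a sketch only, not tested against the original code) is to render the finished MusicSequence through an AVAudioUnitSampler driven by an AVAudioSequencer instead of the MusicPlayer above. The SoundFont name "Instrument.sf2" is a placeholder for a file added to the playground's Resources.

import AVFoundation
import AudioToolbox

// Sketch: route the sequence built above through a sampler loaded with a SoundFont.
let engine = AVAudioEngine()
let sampler = AVAudioUnitSampler()
engine.attach(sampler)
engine.connect(sampler, to: engine.mainMixerNode, format: nil)

// "Instrument.sf2" is a placeholder; put a real SoundFont in the playground's Resources.
if let sf2URL = Bundle.main.url(forResource: "Instrument", withExtension: "sf2") {
    try sampler.loadSoundBankInstrument(at: sf2URL,
                                        program: 0,
                                        bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                                        bankLSB: UInt8(kAUSampler_DefaultBankLSB))
}
try engine.start()

// Convert the in-memory MusicSequence to standard MIDI data and hand it to a sequencer.
var midiData: Unmanaged<CFData>?
MusicSequenceFileCreateData(sequence!, .midiType, .eraseFile, 480, &midiData)

let sequencer = AVAudioSequencer(audioEngine: engine)
if let data = midiData?.takeRetainedValue() as Data? {
    try sequencer.load(from: data, options: [])
    sequencer.prepareToPlay()
    try sequencer.start()
}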
EDIT #2: OK, I missed something big here, but I still have a problem. The reason the sound is soft and I have to amplify it is that it is coming from the earpiece, not the speaker. When I add the .defaultToSpeaker option to setCategory, I get no sound at all.
So this is the real problem: when I set the category to .playAndRecord and the option to .defaultToSpeaker, why do I get no sound at all on a real phone? In addition to getting no sound, I did not receive input from the mic either. The sound is fine in the simulator.
EDIT #3: I began observing route changes and my code reports the following when the .defaultToSpeaker option is included.
2020-12-26 12:17:56.212366-0700 SST[13807:3950195] Current route:
2020-12-26 12:17:56.213275-0700 SST[13807:3950195] <AVAudioSessionRouteDescription: 0x2816af8e0,
inputs = (
"<AVAudioSessionPortDescription: 0x2816af900, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Bottom>"
);
outputs = (
"<AVAudioSessionPortDescription: 0x2816af990, type = Speaker; name = Speaker; UID = Speaker; selectedDataSource = (null)>"
)>
The output is set to Speaker. Is it significant that the selectedDataSource is (null)? Before the .defaultToSpeaker option was added, this reported the output as Receiver, also with selectedDataSource = (null), so I would guess not.
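(For reference, the route log above can be produced with a plain AVAudioSession route-change observer; the sketch below uses my own names and is not the project's actual code.)

import AVFoundation

// Sketch: print the current audio route now and whenever it changes.
func logCurrentRoute() {
    NSLog("Current route:")
    NSLog("%@", AVAudioSession.sharedInstance().currentRoute)
}

let routeObserver = NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: nil,
    queue: .main
) { _ in
    logCurrentRoute()
}
logCurrentRoute()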
EDIT: I added the code to set the audio session category. The new code is shown below. So far it seems to have no effect: whether I leave it in or comment it out, I see no difference. I also have code (deleted here for simplicity) that modifies the microphone pattern. That had no discernible effect either. Perhaps, though, that is to be expected?
In addition to the symptoms below, if I use Settings/Bluetooth to select the AirPods, I get no output from the app at all, even after I remove the AirPods.
What am I missing here?
/EDIT
After getting this to work well on the simulator, I moved to debugging on my 11 Pro Max. When playing notes on the MandolinString, the sound from the simulator (11 Pro Max or an 8) is loud and clear. On the real phone, the sound is barely audible and comes only from the phone itself; it does not go to the attached audio device, be that a HomePod or AirPods. Is this an AudioKit v5 bug? Do I need to do something with the output?
A second, less important issue is that when I instantiate this object, the MandolinString triggers without my calling anything. The extra fader and the reset of the gain from 0 to 1 after a delay suppress this sound.
private let engine = AudioEngine()
private let mic : AudioEngine.InputNode
private let micAmp : Fader
private let mixer1 : Mixer
private let mixer2 : Mixer
private let silence : Fader
private let stringAmp : Fader
private var pitchTap : PitchTap
private var tockAmp : Fader
private var metro = Timer()
private let sampler = MIDISampler(name: "click")
private let startTime = NSDate.timeIntervalSinceReferenceDate
private var ampThreshold: AUValue = 0.12
private var ampJumpSize: AUValue = 0.05
private var samplePause = 0
private var trackingNotStarted = true
private var tracking = false
private var ampPrev: AUValue = 0.0
private var freqArray: [AUValue] = []
init() {
    // Set up mic input and pitchtap
    mic = engine.input!
    micAmp = Fader(mic, gain: 1.0)
    mixer1 = Mixer(micAmp)
    silence = Fader(mixer1, gain: 0)
    mixer2 = Mixer(silence)
    pitchTap = PitchTap(mixer1, handler: { _, _ in })

    // All sound is fed into mixer2
    // Mic input is faded to zero

    // Now add String sound to Mixer2 with a Fader
    pluckedString = MandolinString()
    stringAmp = Fader(pluckedString, gain: 4.0)
    mixer2.addInput(stringAmp)

    // Create a sound for the metronome (tock), add as input to mixer2
    try! sampler.loadWav("Click")
    tockAmp = Fader(sampler, gain: 1.0)
    mixer2.addInput(tockAmp)

    engine.output = mixer2

    self.pitchTap = PitchTap(micAmp, handler: { freq, amp in
        if (self.samplePause <= 0 && self.tracking) {
            self.samplePause = 0
            self.sample(freq: freq[0], amp: amp[0])
        }
    })

    do {
        //try audioSession.setCategory(AVAudioSession.Category.playAndRecord, mode: AVAudioSession.Mode.measurement)
        try audioSession.setCategory(AVAudioSession.Category.playAndRecord)
        //, options: AVAudioSession.CategoryOptions.defaultToSpeaker)
        try audioSession.setActive(true)
    } catch let error as NSError {
        print("Unable to create AudioSession: \(error.localizedDescription)")
    }

    do {
        try engine.start()
        akStartSucceeded = true
    } catch {
        akStartSucceeded = false
    }
} // init
Xcode 12, iOS 14, SPM. Everything is up to date.
Most likely this is not an AudioKit issue per se; it has to do with AVAudioSession. You probably need to set the session on the device to default to the speaker (.defaultToSpeaker). AudioKit 5 does less automatic session management than version 4, opting to make fewer assumptions and leave control to the developer.
The answer was indeed to add code for AVAudioSession. However, it did not work where I first put it; it only worked for me when I put it in the app delegate's didFinishLaunchingWithOptions. I found this in the AudioKit Cookbook. This works:
class AppDelegate: UIResponder, UIApplicationDelegate {

    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Override point for customization after application launch.
        #if os(iOS)
        self.audioSetup()
        #endif
        return true
    }

    #if os(iOS)
    func audioSetup() {
        let session = AVAudioSession.sharedInstance()
        do {
            Settings.bufferLength = .short
            try session.setPreferredIOBufferDuration(Settings.bufferLength.duration)
            try session.setCategory(.playAndRecord,
                                    options: [.defaultToSpeaker, .mixWithOthers])
            try session.setActive(true)
        } catch let err {
            print(err)
        }
        // Other AudioSession stuff here
        do {
            try session.setActive(true)
        } catch let err {
            print(err)
        }
    }
    #endif
}
AudioKit includes a great tool to track signal amplitude: AKAmplitudeTracker.
This tracker can be initialized with a thresholdCallback; I assume the callback should fire when the threshold is reached.
I'm playing with the MicrophoneAnalysis example and I can't find a way to get my callback to trigger.
Here is my code:
var mic: AKMicrophone!
var trackerAmplitude: AKAmplitudeTracker!
var silence: AKBooster!
AKSettings.audioInputEnabled = true
mic = AKMicrophone()
trackerAmplitude = AKAmplitudeTracker(mic, halfPowerPoint: 10, threshold: 0.01, thresholdCallback: { (success) in
print("thresholdCallback: \(success)")
})
trackerAmplitude.start()
silence = AKBooster(trackerAmplitude, gain: 0)
AudioKit.output = silence
I tried playing with the halfPowerPoint and threshold values, but even with very low values I cannot get anything to print :/
However, when I print trackerAmplitude.amplitude, I get values higher than 0.01.
Is there something I'm missing?
The following code works. Tested with AudioKit 4.9, Xcode 11.2, in a macOS playground.
This might be an AudioKit issue, but the threshold must be changed via the property to activate tracking, as shown below...
import AudioKitPlaygrounds
import AudioKit
let mic = AKMicrophone()
AKSettings.audioInputEnabled = true
let amplitudeTracker = AKAmplitudeTracker(mic, halfPowerPoint: 10, threshold: 1, thresholdCallback: { (success) in
print("thresholdCallback: \(success)")
})
AudioKit.output = amplitudeTracker
try AudioKit.start()
amplitudeTracker.threshold = 0.01 // !! MUST BE SET VIA PROPERTY
amplitudeTracker.start()
mic?.start()
import PlaygroundSupport
PlaygroundPage.current.needsIndefiniteExecution = true
I can't seem to get the AudioKit instruments to behave the way I'd like: I want to be able to change the frequency continuously and also have the instruments play for an infinite amount of time, just like the oscillators. However, I can't even get a simple playground like the following to output any sound:
//: ## Flute
//: Physical model of a Flute
import AudioKitPlaygrounds
import AudioKit
let playRate = 2.0
let flute = AKFlute()
let reverb = AKReverb(flute)
var triggered = false
let performance = AKPeriodicFunction(frequency: playRate) {
    if !triggered {
        flute.frequency = 240.0
        flute.amplitude = 0.6
        flute.play()
        triggered = true
    }
}
AudioKit.output = reverb
try AudioKit.start(withPeriodicFunctions: performance)
performance.start()
import PlaygroundSupport
PlaygroundPage.current.needsIndefiniteExecution = true
The behavior I want is the ability to set the frequency at any time and have the note ring out forever. Is this possible?
Change flute.play() to flute.trigger()
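Applied to the playground above, only that one call changes. Treat this as a sketch: depending on the AudioKit version, trigger may also accept frequency/amplitude arguments.

let performance = AKPeriodicFunction(frequency: playRate) {
    if !triggered {
        flute.frequency = 240.0
        flute.amplitude = 0.6
        flute.trigger()   // trigger() instead of play()
        triggered = true
    }
}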
I'm trying to use AKSampler in a simple iOS project to load a file and play it when tapping the device's screen.
I did the same steps with AKSamplePlayer and it worked fine, but I'd rather use AKSampler, and I also have a strong feeling I'm missing something.
I've tried the play() method, and also the one that takes a MIDI note.
Which one is right? Do they both work? Aside from that, AudioKit looks very promising.
Here is my code:
import UIKit
import AudioKit
class ViewController: UIViewController {

    var sampler = AKSampler()
    var tapRecognizer = UITapGestureRecognizer()

    override func viewDidLoad() {
        super.viewDidLoad()
        do {
            let file = try AKAudioFile(readFileName: "AH_G2.wav")
            try sampler.loadAudioFile(file)
        } catch {
            print("No Such File...")
        }
        view.addGestureRecognizer(tapRecognizer)
        view.isUserInteractionEnabled = true
        tapRecognizer.addTarget(self, action: #selector(viewTapped))
        AudioKit.output = sampler
        AudioKit.start()
    }

    @objc private func viewTapped() {
        sampler.play(noteNumber: 60, velocity: 80, channel: 0)
        print("tapped...")
    }
}
Edit:
My problem is actually with the loadAudioFile method; the AKAudioFile itself is fine, and the AKSampler plays a default sine sound.
I also tried the AKAudioFile methods for creating a player and a sampler; the sampler still didn't play.
let file = try AKAudioFile (readFileName: "AH_G2.wav")
player = file.player
sampler = file.sampler
I also tried to add the wav file using the menu, no change.
If you look at the implementation, there is just the one play() method, but it has default values for noteNumber, velocity, and channel:
@objc open func play(noteNumber: MIDINoteNumber = 60,
                     velocity: MIDIVelocity = 127,
                     channel: MIDIChannel = 0) {
    samplerUnit.startNote(noteNumber, withVelocity: velocity, onChannel: channel)
}
Changing the MIDI note will change the pitch/speed of the sample playback (60 is standard, 72 is double speed, 48 would be half speed etc), and changing the velocity will change the volume.
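For example, with the sampler set up as in the question, these calls (my own illustration, not from the answer) use those defaults and the pitch/speed behaviour described above:

sampler.play()                               // defaults: note 60, velocity 127, channel 0
sampler.play(noteNumber: 72, velocity: 100)  // one octave up: double speed/pitch
sampler.play(noteNumber: 48, velocity: 60)   // one octave down: half speed, quieter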
NB: the title of your post is 'AKSampler doesn't play', but I ran your code (changing the sample, of course) and it played just fine on my iPad.
I tried a different audio file and it worked fine.
The first file was a mono file, so my conclusion is that AKSampler does not support mono files. I'd love to hear more on that.
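If mono really is the culprit, one workaround (a sketch using plain AVFoundation rather than any AudioKit API; the function name and URLs are hypothetical) is to write a stereo copy of the sample and load that into AKSampler instead:

import AVFoundation

// Hypothetical helper: write a stereo copy of a mono WAV so the sampler gets a 2-channel file.
func makeStereoCopy(of inputURL: URL, to outputURL: URL) throws {
    let inputFile = try AVAudioFile(forReading: inputURL)
    let monoFormat = inputFile.processingFormat
    guard monoFormat.channelCount == 1,
          let stereoFormat = AVAudioFormat(standardFormatWithSampleRate: monoFormat.sampleRate,
                                           channels: 2),
          let converter = AVAudioConverter(from: monoFormat, to: stereoFormat) else { return }

    // Assumes the sample is short enough to fit in a single buffer.
    let frameCount = AVAudioFrameCount(inputFile.length)
    guard let inBuffer = AVAudioPCMBuffer(pcmFormat: monoFormat, frameCapacity: frameCount),
          let outBuffer = AVAudioPCMBuffer(pcmFormat: stereoFormat, frameCapacity: frameCount) else { return }

    try inputFile.read(into: inBuffer)
    try converter.convert(to: outBuffer, from: inBuffer) // same sample rate, so the simple convert API is enough

    let outputFile = try AVAudioFile(forWriting: outputURL, settings: stereoFormat.settings)
    try outputFile.write(from: outBuffer)
}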
I want to build an Apple TV app that plays a list of short videos and plays music over them.
To achieve this I need to do the following two things:
1) Mute the videos or remove the audio tracks from them
I have no idea if/how this is possible. I looked around the TVJS documentation for Player and MediaItem but found nothing.
2) Play two media items at the same time.
From this I already know that this is at least not possible with two players. I also tried to use the background audio of my TVML Template but this didn't work either.
Does anyone know of a way how something like this would be possible?
Edit (some more information):
For testing I used the code from this article.
At the suggestion of Daniel Storm, I tried changing the load function in Presenter.js to both of the following:
load: function(event) {
    var self = this,
        ele = event.target,
        videoURL = ele.getAttribute("videoURL")
    if (videoURL) {
        var player = new Player();
        var playlist = new Playlist();
        var mediaItem = new MediaItem("video", videoURL);
        player.playlist = playlist;
        player.playlist.push(mediaItem);
        mediaItem.volume = 0.0;
        player.present();
    }
},
and
load: function(event) {
    var self = this,
        ele = event.target,
        videoURL = ele.getAttribute("videoURL")
    if (videoURL) {
        var player = new Player();
        var playlist = new Playlist();
        var mediaItem = new MediaItem("video", videoURL);
        player.playlist = playlist;
        player.playlist.push(mediaItem);
        player.volume = 0.0;
        player.present();
    }
},
but neither worked.