receivedMIDIController not being called from sequencer - AudioKit

Current State:
I'm playing a MIDI file using AppleSequencer.
Virtual ports and a listener are implemented, and receivedMIDINoteOn/Off work great.
My Problem
receivedMIDIController is never called from my sequencer.
Messages sent from Logic Pro X's MIDI Out port do trigger receivedMIDIController (same MIDI file).
I want to know what is happening. Can anyone help me, please?

OK, I figured it out: a new track filled with "replaceMIDINoteData" contains note events only.
var sequencer = AppleSequencer()
let t = sequencer.newTrack("track")
t?.replaceMIDINoteData(with: seqSty.tracks[0].getMIDINoteData())
// sequencer has note events only
A sequencer created from MIDI data keeps all event types (including controller events), so load the file first and then replace just the note data:
sequencer = AppleSequencer(fromData: data)
sequencer.tracks[0].replaceMIDINoteData(with: seqSty.tracks[0].getMIDINoteData())
// sequencer keeps all events from the MIDI file

Related

SysEx event not received

I have a small app receiving MIDI over a Bluetooth channel.
It works well using the Core MIDI APIs, so I tried AudioKit to simplify my code.
The problem is: I can't see the SysEx events in the listener (although I see them in my Core MIDI code). Other MIDI events are received.
AKMidi = AudioKit.midi
AKMidi?.addListener(AVKMIDIControl())
...
AKMidi?.openInput(index: i)
...
class AVKMIDIControl: AKMIDIListener {
    ...
    // copy-paste from the audiokit.io example
    func receivedMIDISystemCommand(_ data: [MIDIByte]) {
        if let command = AKMIDISystemCommand(rawValue: data[0]) {
            var newString = "MIDI System Command: \(command) \n"
            for i in 0 ..< data.count {
                newString.append("\(data[i]) ")
            }
            print(newString)
        }
    }
    ...
}
I should be receiving SysEx events in the listener (by the way, I implemented all the other callbacks to be sure I'm catching everything), but I only get log messages like
AKMIDI.swift:startReceivingSysex(with:):102:Starting to receive Sysex
AKMIDI.swift:stopReceivingSysex():107:Done receiving Sysex
but nothing comes through the listener.
How can I get the SysEx message data?
I can confirm this behavior in the current version (4.7.2). However, in version 4.5.6, SysEx (and other MIDI messages) are received by your MIDI listener, so my app is still on that version; for some reason, receiving MIDI just seems broken in later versions.
But version 4.5.6 is not perfect either: it cuts large SysEx messages into multiple smaller ones. I have some code to deal with that; let me know if you need it.
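The reassembly code isn't shown in the answer, but the idea can be sketched as plain buffering logic, independent of AudioKit: accumulate incoming fragments until the End of Exclusive byte (0xF7) arrives. The type and method names here are illustrative, not AudioKit API:

```swift
// Sketch: reassemble a SysEx message that arrives in fragments.
// A SysEx message starts with 0xF0 (Start of Exclusive)
// and ends with 0xF7 (End of Exclusive).
struct SysExAssembler {
    private var buffer: [UInt8] = []

    // Feed each fragment in arrival order; returns the complete
    // message once the terminating 0xF7 has been seen, else nil.
    mutating func receive(_ fragment: [UInt8]) -> [UInt8]? {
        buffer.append(contentsOf: fragment)
        guard buffer.first == 0xF0, buffer.last == 0xF7 else { return nil }
        defer { buffer.removeAll() }
        return buffer
    }
}
```

Feeding [0xF0, 0x41, 0x10] and then [0x42, 0xF7] would return the complete five-byte message on the second call.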

Aftertouch / Pressure Midi command not working in AVFoundation

I am using AVAudioUnitSampler to play some MIDI sounds; I have a SoundFont loaded and have successfully used the start note, stop note, and pitch bend MIDI commands. I am now trying to incorporate aftertouch, or pressure commands as they are called in AVFoundation.
So my code looks roughly like this (simplified):
self.midiAudioUnitSampler.startNote(60, withVelocity: 60, onChannel: 0)
//some time later...
self.midiAudioUnitSampler.sendPressure(20, onChannel: 0)
The note is humming away, but the sendPressure commands seem to have no effect on the sound output. I have tried both sendPressure and sendPressure(forKey:) with no luck.
What am I doing wrong, or am I misunderstanding what sendPressure does? I expect it to change the volume of the note after it is played.
By the way, my setup plays the note and has a separate control that fires pressure commands into the sampler some time after playback has started.
My guess is that the sampler does not know what to do with aftertouch messages. If you want to change the volume of the note (and any other notes playing on that channel), you could send your value to controller 7 (channel volume) instead:
self.midiAudioUnitSampler.sendController(7, withValue: 20, onChannel: 0)
From my experience, the sampler does respond to MIDI controller 7.
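To emulate a pressure-style fade with this controller-7 workaround, you could ramp the value over time. A minimal sketch, assuming `sampler` is an AVAudioUnitSampler already attached to a running AVAudioEngine; the step count and timing are arbitrary choices, and `fadeChannelVolume` is a hypothetical helper name:

```swift
import AVFoundation

// Hypothetical helper: fades channel volume (CC 7) as a stand-in
// for aftertouch, which the sampler appears to ignore.
func fadeChannelVolume(on sampler: AVAudioUnitSampler,
                       from start: UInt8, to end: UInt8,
                       over duration: TimeInterval,
                       channel: UInt8 = 0) {
    let steps = 20  // arbitrary resolution for the ramp
    for i in 0...steps {
        let t = Double(i) / Double(steps)
        let value = UInt8(Double(start) + (Double(end) - Double(start)) * t)
        // Schedule each CC 7 message along the ramp.
        DispatchQueue.main.asyncAfter(deadline: .now() + duration * t) {
            sampler.sendController(7, withValue: value, onChannel: channel)
        }
    }
}
```

For example, `fadeChannelVolume(on: sampler, from: 100, to: 20, over: 1.0)` would fade the channel down over one second after the note starts.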

iOS: Playing PCM buffers from a stream

I'm receiving a series of UDP packets from a socket containing encoded PCM buffers. After decoding them, I'm left with an int16 * audio buffer, which I'd like to immediately play back.
The intended logic goes something like this:
init() {
    initTrack(track, output, channels, sample_rate, ...);
}

onReceiveBufferFromSocket(NSData data) {
    // Decode the buffer
    int16 *buf = handle_data(data);
    // Play data
    write_to_track(track, buf, length_of_buf, etc);
}
I'm not sure about everything involved in playing back the buffers, though. On Android, I can achieve this by creating an AudioTrack object, setting it up with a sample rate, a format, channels, etc., and then just calling the "write" method with the buffer (as I wish I could in my pseudo-code above), but on iOS I'm coming up short.
I tried using Audio File Stream Services, but I'm guessing I'm doing something wrong, since no sound ever comes out and I feel like those functions by themselves don't actually do any playback. I also attempted to understand Audio Queue Services (which I think might be close to what I want), but I was unable to find any simple code samples for its usage.
Any help would be greatly appreciated, especially in the form of example code.
You need to use some type of buffer to hold your incoming UDP data. This is an easy and good circular buffer that I have used.
Then to play back data from the buffer, you can use Audio Unit framework. Here is a good example project.
Note: The first link also shows you how to playback using Audio Unit.
You could use Audio Queue Services as well; just make sure you're doing some kind of packet re-ordering. If you're using ffmpeg to decode the streams, there is an option for this.
Otherwise, audio queues are easy to set up.
https://github.com/mooncatventures-group/iFrameExtractor/blob/master/Classes/AudioController.m
You could also use AudioUnits, a bit more complicated though.
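As a more recent alternative not mentioned in the original answers, AVAudioEngine with an AVAudioPlayerNode can schedule raw PCM buffers for near-immediate playback. A minimal sketch, assuming 16-bit mono PCM at 48 kHz (the sample rate and the `playPCM` name are illustrative):

```swift
import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
// The mixer works in Float32, so the Int16 samples are converted below.
let format = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                           sampleRate: 48000, channels: 1,
                           interleaved: false)!

func startEngine() throws {
    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: format)
    try engine.start()
    player.play()
}

// Called for every decoded packet from the socket.
func playPCM(_ samples: [Int16]) {
    guard let buffer = AVAudioPCMBuffer(
        pcmFormat: format,
        frameCapacity: AVAudioFrameCount(samples.count)) else { return }
    buffer.frameLength = AVAudioFrameCount(samples.count)
    let channel = buffer.floatChannelData![0]
    for (i, sample) in samples.enumerated() {
        channel[i] = Float(sample) / Float(Int16.max)  // scale to [-1, 1]
    }
    // Queued buffers play back-to-back in scheduling order.
    player.scheduleBuffer(buffer, completionHandler: nil)
}
```

Scheduling each decoded packet as its own buffer keeps latency low, though for lossy networks you would still want the jitter/re-ordering buffer the answers describe in front of this.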

How to bind the HTML5::stalled event from soundmanager?

I'm trying to write a JavaScript app that uses the [SoundManager 2][1] API and aims to run in all desktop and mobile browsers. On the iPad, SoundManager uses the HTML5 audio API, since there is no Flash support. Now, when I try to play two audio files back to back, both loaded in response to a click event, an [HTML5::stalled][2] event is occasionally raised. How do I set an event handler to catch the stalled event?
Since sound objects in my app are created on the fly, and I don't know how to directly access the tags that SoundManager creates, I tried to use a delegate to handle the stalled event:
document.delegate('audio', 'stalled', function (event) {...});
It doesn't work; the event is not raised in response to stalling (I had an alert in my handler).
I also tried using [Sound::onsuspend()][3] to listen for stalled, but onsuspend fires at the end of sound::play(). How can we distinguish stalled from the other events that may raise audio::suspend? Is there any other way to access the tags that SoundManager must create in order to play HTML audio?
I solved it with the following. This is not documented and was found by reverse engineering.
It is all about accessing the HTML audio object, which is available under _a.
currentSound = soundManager.createSound({..});
currentSound._a.addEventListener('stalled', function() {
    if (!self.currentSound) return;
    var audio = this;
    audio.load();
    audio.play();
});
The body of the method is based on this post about html5 stalled callback in safari
I can suggest a different "fix" that I use on another HTML5-only platform (Samsung smart TV):
var mySound = soundManager.createSound({..});
mySound.load();
setTimeout(function() {
    if (mySound.readyState == 1) {
        // this object is probably stalled
    }
}, 1500);
This works since in HTML5, unlike Flash, the readyState property jumps from 0 to 3 almost instantaneously, skipping 1 (because if the track has started buffering, it's playable...).
Hope this works for you as well.

receive microphone sound without hearing it

This captures microphone sound and changes the alpha of 'foo' according to the sound level. However, I can hear the microphone's input. I want the visuals to work without hearing any sound. How would I do that?
m = Microphone.get();
_root.attachAudio(m);
m.setVolume(0); // I can still hear sound; this does not mute the mic.
onEnterFrame = function () {
    foo._alpha = m.activityLevel + 33;
};
EDIT: ANSWER / SOLUTION
series8217's trick with setLoopBack did not work, but it led me to the answer online:
m = Microphone.get();
var myAudio:Sound = new Sound(attachAudio(m));
myAudio.setVolume(0);
Thanks for your time.
EDIT: OTHER SOLUTION
My trick above may interfere with sound. Using this instead mutes the mic while Flash still receives input:
m = Microphone.get();
m.setSilenceLevel(100);
Switching off the loopback mode on the microphone object should do the trick:
m.setLoopBack(false);
However, if that doesn't do it, perhaps your OS sound settings have a monitor or loopback mode turned on. I'd look into that before trying setLoopBack().
