Play MIDI with Core MIDI on device - iOS

I've been using Core MIDI to connect to USB devices and/or Wi-Fi hosts. It works fine and sends my MIDI events.
I want to send them to the device itself to be played, like MusicPlayer does, but I don't want to send MIDI files, just my own MIDI events.
What should I do? I tried connecting to the first available destination (MIDIGetNumberOfDestinations) but it didn't work.

A corrected answer, now that I understand the question better.
Here is a sample from one of my projects:
// Set up the MIDI client and input port
MIDIClientRef client = NULL;
MIDIPortRef inport = NULL;
CheckError(MIDIClientCreate(CFSTR("MyApplication"),
                            NULL,
                            NULL,
                            &client),
           "Couldn't create MIDI client");
// MIDIInputPortCreate also takes a MIDIReadProc and a refCon;
// MyMIDIReadProc stands in here for the app's own read callback.
CheckError(MIDIInputPortCreate(client,
                               CFSTR("MyApplication Input port"),
                               MyMIDIReadProc,
                               (__bridge void *)self,
                               &inport),
           "Couldn't create input port");
[self setInputPort:inport];
[self setMidiClient:client];
[self setDestinationEndpoint:[[self midiSession] destinationEndpoint]];
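If the goal is simply to have the device itself render your own MIDI events (rather than forwarding them to an external destination), one option is to drive an AVAudioUnitSampler attached to an AVAudioEngine. The sketch below is only an illustration of that alternative, not the code from this answer, and `soundFontURL` is a placeholder for a SoundFont bundled with your app.

import AVFoundation
import AudioToolbox

// Minimal sketch: render your own MIDI events on-device with an AVAudioUnitSampler.
// `soundFontURL` is assumed to point at a .sf2/.dls file shipped with the app.
func makeOnDeviceSynth(soundFontURL: URL) throws -> (AVAudioEngine, AVAudioUnitSampler) {
    let engine = AVAudioEngine()
    let sampler = AVAudioUnitSampler()

    engine.attach(sampler)
    engine.connect(sampler, to: engine.mainMixerNode, format: nil)
    try engine.start()

    try sampler.loadSoundBankInstrument(at: soundFontURL,
                                        program: 0,
                                        bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                                        bankLSB: UInt8(kAUSampler_DefaultBankLSB))
    return (engine, sampler)
}

// Usage: send raw MIDI bytes exactly as you would to an external destination,
// or use the convenience note API:
//   sampler.sendMIDIEvent(0x90, data1: 60, data2: 100)   // note on, middle C
//   sampler.startNote(60, withVelocity: 100, onChannel: 0)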

Related

Switch to default audio input device when it's connected in js

I'm using the Twilio SDK to implement a web calling app. Let's say I'm making a call to someone using my laptop's devices (mic and speakers). During the call I plug in my headset. In the system, both the audio input and output devices change. The call's audio output switches fine (I can hear my counterpart through my headphones), but the audio input device stays the same: the app doesn't start using the mic on my headset.
Is there any way to update the audio input track to switch to the headset once it's connected?
First of all, get the local participant's tracks and find the audio track.
const publications = Array.from(this.participant.tracks.values());
const audioPublication = publications.find(item => item.kind === 'audio');
Then set the deviceId like this.
const constraints = { deviceId: { exact: deviceId } };
audioPublication.track.restart(constraints);

iOS Audio Units - Connecting with Graphs?

I've jumped off the deep end, and have decided to figure out low-latency audio on iOS using Audio Units. I've read as much documentation (from Apple and forums galore) as I can find, and the overall concepts make sense, but I'm still scratching my head on some concepts that I need help with:
I saw somewhere that AU Graphs are deprecated and that I should instead connect Audio Units directly. I'm cool with that... but how? Do I just need to use the Connection property of an Audio Unit to connect it to a source AU, and off I go? Initialize and Start the Units, and watch the magic happen? (cause it doesn't for me...)
What's the best Audio Unit setup to use if I simply want to grab audio from my mic, do some processing to the audio data, and then store that audio data without sending it out to the RemoteIO speaker, bus 0 output? I tried hooking up a GenericOutput AudioUnit to catch the data in a callback without any luck...
That's it. I can provide code when requested, but it's way too late, and this has wiped me out. If there's no easy answer, that's cool. I'll send any code snippets at will. Suffice it to say, I can easily get a simple RemoteIO, mic-in, speaker-out setup working great. Latency seems non-existent (at least to my ears). I just want to do something with the mic data and store it in memory without it going out to the speaker. Eventually hooking in the EQ and mixer would be hip, but one step at a time.
FWIW, I'm coding in Xamarin Forms/C# land, but code examples in Objective-C, Swift or whatever are fine. I'm stuck on the concepts, not necessarily the exact code.
THANKS!
Working with audio units without a graph is pretty simple and very flexible. To connect two units, you call AudioUnitSetProperty like this:
AudioUnitConnection connection;
connection.sourceAudioUnit    = sourceUnit;
connection.sourceOutputNumber = sourceOutputIndex;
connection.destInputNumber    = destinationInputIndex;

AudioUnitSetProperty(destinationUnit,
                     kAudioUnitProperty_MakeConnection,
                     kAudioUnitScope_Input,
                     destinationInputIndex,
                     &connection,
                     sizeof(connection));
Note that units connected this way must have matching stream formats, and that the formats must be set before the units are initialized.
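For illustration, here is a small sketch (in Swift, calling the same C API) of what setting the stream formats uniformly looks like; the units and bus indices are assumed to be the same ones used in the connection above, and the particular format is just an example.

import AudioToolbox

// Sketch: give both ends of the connection the same stream format
// *before* AudioUnitInitialize is called on either unit.
func applyCommonFormat(sourceUnit: AudioUnit,
                       sourceOutputIndex: UInt32,
                       destinationUnit: AudioUnit,
                       destinationInputIndex: UInt32) {
    var format = AudioStreamBasicDescription(
        mSampleRate: 44100,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved,
        mBytesPerPacket: 4,
        mFramesPerPacket: 1,
        mBytesPerFrame: 4,
        mChannelsPerFrame: 2,
        mBitsPerChannel: 32,
        mReserved: 0)
    let size = UInt32(MemoryLayout<AudioStreamBasicDescription>.size)

    // Output scope of the source unit's output bus...
    AudioUnitSetProperty(sourceUnit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Output, sourceOutputIndex, &format, size)
    // ...and input scope of the destination unit's input bus.
    AudioUnitSetProperty(destinationUnit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input, destinationInputIndex, &format, size)
}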
Your question mentions Audio Units and Graphs. As said in the comments, the graph concept has been replaced with the idea of attaching "nodes" to an AVAudioEngine. These nodes then "connect" to other nodes. Connecting nodes creates signal paths, and starting the engine makes it all happen. This may be obvious, but I am trying to respond generally here.
You can do this all in Swift or in Objective-C.
Two high-level perspectives to consider with iOS audio are the idea of a "host" and that of a "plugin". The host is an app, and it hosts plugins. The plugin is usually created as an "app extension", and you can look up audio unit extensions for more about that as needed. You said you have one doing what you want, so all of this explains the code used in a host.
Attach an AudioUnit to an AVAudioEngine
var components = [AVAudioUnitComponent]()
let anyDescription = AudioComponentDescription(componentType: 0,
                                               componentSubType: 0,
                                               componentManufacturer: 0,
                                               componentFlags: 0,
                                               componentFlagsMask: 0)

// AudioUnitTypes is a helper from this project that filters by component type
// and skips Apple's built-in plugins.
components = AVAudioUnitComponentManager.shared().components(matching: anyDescription)
    .compactMap { au -> AVAudioUnitComponent? in
        if AudioUnitTypes.codeInTypes(au.audioComponentDescription.componentType,
                                      AudioUnitTypes.instrumentAudioUnitTypes,
                                      AudioUnitTypes.fxAudioUnitTypes,
                                      AudioUnitTypes.midiAudioUnitTypes) && !AudioUnitTypes.isApplePlugin(au.manufacturerName) {
            return au
        }
        return nil
    }

guard let component = components.first else { fatalError("bugs") }
let description = component.audioComponentDescription

AVAudioUnit.instantiate(with: description) { (audioUnit: AVAudioUnit?, error: Error?) in
    if let e = error {
        return print("\(e)")
    }
    // save and connect
    guard let audioUnit = audioUnit else {
        print("Audio Unit was Nil")
        return
    }
    let hardwareFormat = self.engine.outputNode.outputFormat(forBus: 0)
    self.engine.attach(audioUnit)
    self.engine.connect(audioUnit, to: self.engine.mainMixerNode, format: hardwareFormat)
}
Once you have your AudioUnit loaded, you can connect it and tap it with the AVAudioNodeTapBlock described below. The plugin side has more to it, since it needs to be built as a separate binary that host apps other than yours can load.
Recording an AVAudioInputNode
(You can replace the audio unit with the input node.)
In an app, you can record audio by creating an AVAudioInputNode, or just by referencing the 'inputNode' property of the AVAudioEngine, which is connected to the system's selected input device (mic, line in, etc.) by default.
Once you have the input node whose audio you want to process, "install a tap" on it. You can also connect your input node to a mixer node and install the tap there.
https://developer.apple.com/documentation/avfoundation/avaudionode/1387122-installtap
func installTap(onBus bus: AVAudioNodeBus,
                bufferSize: AVAudioFrameCount,
                format: AVAudioFormat?,
                block tapBlock: @escaping AVAudioNodeTapBlock)
The installed tap basically splits your audio stream into two signal paths: it keeps sending the audio to the AVAudioEngine's output device, and it also sends the audio to a function that you define. That function (an AVAudioNodeTapBlock) is passed to 'installTap' on AVAudioNode. The AVFoundation subsystem calls the AVAudioNodeTapBlock and passes you the input data one buffer at a time, along with the time at which the data arrived.
https://developer.apple.com/documentation/avfoundation/avaudionodetapblock
typealias AVAudioNodeTapBlock = (AVAudioPCMBuffer, AVAudioTime) -> Void
Now the system is sending the audio data to a programmable context, and you can do what you want with it.
To use it elsewhere, you can create a separate AVAudioPCMBuffer and write each of the passed in buffers to it in the AVAudioNodeTapBlock.
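As an illustrative sketch (not part of the original answer), this is roughly what that looks like when the tap writes the mic buffers straight to an AVAudioFile instead; `outputURL` and the buffer size are assumptions.

import AVFoundation

// Sketch: tap the engine's input node and store the mic buffers in a file,
// without connecting the input to the output/speaker path at all.
func recordMicWithoutPlayback(engine: AVAudioEngine, outputURL: URL) throws {
    let input = engine.inputNode
    let format = input.outputFormat(forBus: 0)
    let file = try AVAudioFile(forWriting: outputURL, settings: format.settings)

    input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
        // Each callback delivers one AVAudioPCMBuffer of mic data;
        // process it here and/or append it to the file.
        try? file.write(from: buffer)
    }

    try engine.start()
}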

Writing buffers of streamed mp3 packets to a wav file using ExtAudioFileWrite on iOS

I am working on an online radio app. I managed to play the streamed mp3 packets from the Icecast server using Audio Queue Services; what I am struggling with is implementing a recording feature.
Since the stream is in mp3 format, I cannot write the audio packets directly to a file using AudioFileWritePackets.
To leverage the automatic conversion of Extended Audio File Services, I am using ExtAudioFileWrite to write to a wav file. I set up the AudioStreamBasicDescription of the incoming packets in the AudioFileStreamOpen callback AudioFileStream_PropertyListenerProc, and I populated the destination format manually. The code successfully creates the file and writes the packets to it, but on playback all I hear is white noise.
Here is my code:
// when the recording button is pressed, this function creates the file and sets up the ASBD
- (void)startRecording {
    recording = true;
    OSStatus status;
    NSURL *baseUrl = [self applicationDocumentsDirectory]; // returns the app's Documents directory
    NSURL *audioUrl = [NSURL URLWithString:@"Recorded.wav" relativeToURL:baseUrl];

    // ASBD setup for the destination (wav) file
    AudioStreamBasicDescription dstFormat;
    dstFormat.mSampleRate       = 44100.0;
    dstFormat.mFormatID         = kAudioFormatLinearPCM;
    dstFormat.mFormatFlags      = kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    dstFormat.mBytesPerPacket   = 4;
    dstFormat.mBytesPerFrame    = 4;
    dstFormat.mFramesPerPacket  = 1;
    dstFormat.mChannelsPerFrame = 2;
    dstFormat.mBitsPerChannel   = 16;
    dstFormat.mReserved         = 0;

    // creating the file
    status = ExtAudioFileCreateWithURL((__bridge CFURLRef)audioUrl, kAudioFileWAVEType, &dstFormat, NULL, kAudioFileFlags_EraseFile, &recordingFilRef);

    // tell the ExtAudioFile API what format we will be sending samples in
    // (recordasbd is the ASBD of the incoming packets, populated in AudioFileStream_PropertyListenerProc)
    status = ExtAudioFileSetProperty(recordingFilRef, kExtAudioFileProperty_ClientDataFormat, sizeof(recordasbd), &recordasbd);
}
// a handler called from the packet-proc callback passed to AudioFileStreamOpen
- (void)handlePacketsProc:(const void *)inInputData
              numberBytes:(UInt32)inNumberBytes
            numberPackets:(UInt32)inNumberPackets
       packetDescriptions:(AudioStreamPacketDescription *)inPacketDescriptions {
    if (recording) {
        // wrap the incoming bytes in an AudioBufferList
        convertedData.mNumberBuffers = 1;
        convertedData.mBuffers[0].mNumberChannels = recordasbd.mChannelsPerFrame;
        convertedData.mBuffers[0].mDataByteSize = inNumberBytes;
        convertedData.mBuffers[0].mData = (void *)inInputData;
        ExtAudioFileWrite(recordingFilRef, recordasbd.mFramesPerPacket * inNumberPackets, &convertedData);
    }
}
My questions are:
Is my approach right? Can I write mp3 packets to a wav file this way, and if so, what am I missing?
If my approach is wrong, please tell me any other way you think is right. A nudge in the right direction is more than enough for me.
I am grateful for any help. I have read every SO question on this topic I could get my hands on, and I also looked closely at Apple's ConvertFile example, but I could not figure out what I am missing.
Thanks in advance for any help.
Why not write the raw mp3 packets directly to a file, without using ExtAudioFile at all?
They will form a valid mp3 file and will be much smaller than the equivalent wav file.
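As a purely illustrative sketch of that suggestion (the class name, `recordingURL`, and the way the bytes arrive are all assumptions about your packet callback), appending the raw packets is just a plain file write:

import Foundation

// Sketch: append the raw mp3 bytes as they arrive; the result is itself a playable mp3.
final class MP3Recorder {
    private let handle: FileHandle

    init(recordingURL: URL) throws {
        FileManager.default.createFile(atPath: recordingURL.path, contents: nil)
        handle = try FileHandle(forWritingTo: recordingURL)
    }

    // Call this from the packet-proc with the bytes you were handed.
    func append(packetBytes: Data) {
        handle.write(packetBytes)
    }

    func finish() {
        handle.closeFile()
    }
}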

How to resolve MIDI note sound issue?

I am trying to play MIDI notes by loading a SoundFont file for piano, but I have an issue: it does not produce the proper note sound, regardless of the octave of the notes. I am using the MIKMIDI library. I also debugged and noticed that every note number is correct, but not the sound. I am using the following code, which sends MIDI messages to an AudioUnit. Can anyone please help me get this solved?
- (void)handleMIDIMessages:(NSArray *)commands
{
    for (MIKMIDICommand *command in commands)
    {
        OSStatus err = MusicDeviceMIDIEvent(self.instrumentUnit, command.statusByte, command.dataByte1, command.dataByte2, 0);
        NSLog(@"%@", command);
        if (err) NSLog(@"Unable to send MIDI command to synthesizer %@: %@", command, @(err));
    }
}

Detect attached audio devices iOS

I'm trying to figure out how to detect which audio devices, if any, are connected on iPhone/iPad/iPod. I know all about the audio route calls and route change callbacks, but these don't tell me anything about what's attached; they only report where the audio is currently routing. I need to know, for instance, whether headphones and/or Bluetooth are still attached while audio is routed through the speakers. Or, for instance, if a user plugs in a headset while using Bluetooth and then decides to disconnect Bluetooth, I need to know that the Bluetooth is disconnected even while audio is still routing through the headphones.
Unfortunately, as of iOS 11, there seems to be no API to reliably get the list of output devices that are currently attached: as soon as the current route changes, you only see one device (the currently routed one) via AVAudioSession's currentRoute.outputs, even though multiple devices may still be attached.
However, for the input side, which includes Bluetooth devices with the HFP profile, you can get the list of available inputs via AVAudioSession's availableInputs, provided the proper Audio Session mode is used (AVAudioSessionModeVoiceChat or AVAudioSessionModeVideoChat, for example). Those inputs are listed even when the device is not part of the active route, which is very useful when a user does a manual override via MPVolumeView from Bluetooth to the speaker, for example. And since HFP is two-way I/O (it has both input and output), you can judge whether the Bluetooth HFP output is still available by looking at the inputs.
BOOL isBtInputAvailable = NO;
NSArray *inputs = [[AVAudioSession sharedInstance] availableInputs];
for (AVAudioSessionPortDescription *port in inputs) {
    if ([port.portType isEqualToString:AVAudioSessionPortBluetoothHFP]) {
        isBtInputAvailable = YES;
        break;
    }
}
In the case of iOS 5, you should use:
CFStringRef newRoute;
UInt32 size = sizeof(CFStringRef);
XThrowIfError(AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &size, &newRoute),
              "couldn't get new audio route");
if (newRoute)
{
    CFShow(newRoute);
    if (CFStringCompare(newRoute, CFSTR("HeadsetInOut"), 0) == kCFCompareEqualTo) // headset plugged in
    {
        colorLevels[0] = .3;
        colorLevels[5] = .5;
    }
    else if (CFStringCompare(newRoute, CFSTR("SpeakerAndMicrophone"), 0) == kCFCompareEqualTo)
    {
        // speaker and built-in mic route
    }
}
You can get a list of InputSources and OutputDestinations from the Audio Session properties.
Check out these Session Properties:
kAudioSessionProperty_InputSources
kAudioSessionProperty_OutputDestinations
And to query the details of each, you can use:
kAudioSessionProperty_InputSource
kAudioSessionProperty_OutputDestination
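Those kAudioSessionProperty_* constants belong to the old, deprecated C Audio Session API. As a hedged sketch of the same idea using the modern AVAudioSession API (the RouteWatcher wrapper below is my own illustration, not code from the answers above):

import AVFoundation

// Sketch: re-check what is attached whenever the route changes.
// currentRoute.outputs only reflects the active route, while availableInputs
// also lists attached-but-inactive inputs (e.g. Bluetooth HFP).
final class RouteWatcher {
    private var observer: NSObjectProtocol?

    init() {
        observer = NotificationCenter.default.addObserver(
            forName: AVAudioSession.routeChangeNotification,
            object: nil,
            queue: .main) { [weak self] _ in
                self?.logAttachedDevices()
        }
    }

    func logAttachedDevices() {
        let session = AVAudioSession.sharedInstance()
        for output in session.currentRoute.outputs {
            print("current output:", output.portType.rawValue, output.portName)
        }
        for input in session.availableInputs ?? [] {
            print("available input:", input.portType.rawValue, input.portName)
        }
    }
}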
