AudioToolbox MusicPlayer change program has no effect - iOS

MIDI noob in training here...
I have been using MusicPlayer/MusicSequence/MusicTrack to play MIDI notes on devices running iOS. The notes play fine, but I am struggling to change the instrument being played. As far as I can figure, this is how to do it:
-(void) setInstrument:(MIDIInstruments) program channel:(int) channel MusicTrack:(MusicTrack*) track time:(float) time {
    if (channel < 0 || channel > 15 || program >= MIDI_INSTRUMENT_COUNT || time < 0) {
        return;
    }
    // Program change: high nibble 0xC is the program-change status, low nibble is the channel
    MIDIChannelMessage programChange = { ((UInt8)0xC) << 4 | ((UInt8)channel), ((UInt8)program), 0, 0 };
    OSStatus result = MusicTrackNewMIDIChannelEvent(*track, time, &programChange);
    if (result != noErr) {
        [NSException raise:@"Set Instrument" format:@"Failed to set instrument error: %@", [NSError errorWithDomain:NSOSStatusErrorDomain code:result userInfo:nil]];
    }
}
In this case channel is 0 or 1, I tried several instruments throughout the range of valid instrument enumerations, the time is 0.0, and the MusicTrack is valid and has ~30 seconds of note events. The call to set the channel event passes back noErr. I am stumped... Anyone?

I had read in other posts that I would be able to generate MIDI using MusicPlayer and friends, and it provides for program changes, so I had figured it was supported. After exhausting all theories, I turned to AUGraph. I added a *.sf2 file that I found online, and instantiated the AUGraph, two AudioUnits, a MIDIEndpointRef, and a MIDIClientRef, according to this tutorial.
It was in the endpoint callback, where I turn notes on and off using MusicDeviceMIDIEvent on the samplerUnit, that the program change actually took effect. Before, I was just loading note events into a MusicTrack and playing/stopping the MusicPlayer.
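For reference, a minimal sketch of sending the program change through the sampler unit; samplerUnit and the program number here are placeholders, not values from the tutorial:
UInt8 channel = 0;
UInt8 program = 46; // any valid General MIDI program number
// 0xCn is a program change on channel n; data2 and the sample offset are unused here
OSStatus result = MusicDeviceMIDIEvent(samplerUnit, 0xC0 | channel, program, 0, 0);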

Related

no sound when using pjsip

I've got a problem with pjsip. I'm trying to make an outgoing call with pjsua_call_make_call. It works, but when I answer the call on a device, I can't hear any sound. However, I can see an icon on the iPhone indicating that the microphone is in use. Has anybody come across such an issue?
I am having a similar issue. I place an outbound call and can hear the audio on the device when I pick up the call, but can't hear any audio on the device that used pjsip to make the call.
If your audio capture doesn't seem to be working, make sure you have microphone permission, and you have to manually call pjsua_set_snd_dev() when you connect. There's some additional troubleshooting here: https://trac.pjsip.org/repos/wiki/Getting-Started/iPhone#Commonproblems
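A minimal sketch of that manual call, assuming the default device IDs (0, 0); where exactly to call it depends on your call flow, e.g. once the call is confirmed:
/* Re-select the default capture and playback devices once connected */
pj_status_t status = pjsua_set_snd_dev(0, 0);
if (status != PJ_SUCCESS) {
    PJ_LOG(1, ("app", "pjsua_set_snd_dev failed: %d", status));
}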
I've no experience with iOS, but I suppose you should connect the audio stream to some device in the on_call_media_state callback (link); conference slot 0 is the sound device. Look at this minimal example from a desktop app:
pjsua_call_info ci;
pjsua_call_get_info(call_id, &ci);
for (unsigned i = 0; i < ci.media_cnt; i++) {
    if (ci.media[i].type == PJMEDIA_TYPE_AUDIO) {
        if (ci.media[i].status == PJSUA_CALL_MEDIA_ACTIVE) {
            pjsua_conf_connect(ci.conf_slot, 0); /* call audio -> sound device */
            pjsua_conf_connect(0, ci.conf_slot); /* microphone -> call audio   */
        }
    }
}
Edit:
iOS code for default audio stream in call:
var callinfo: pjsua_call_info = pjsua_call_info()
pjsua_call_get_info(call_id, &callinfo)
if callinfo.media_status == PJSUA_CALL_MEDIA_ACTIVE {
    pjsua_conf_connect(callinfo.conf_slot, 0)
    pjsua_conf_connect(0, callinfo.conf_slot)
}

Timestamping in AUv3 MIDI plug-in for iOS

I am trying to understand how timestamping works for an AUv3 MIDI plug-in of type "aumi", where the plug-in sends MIDI events to a host. I cache the MIDIOutputEventBlock and transportStateBlock properties into _outputEventBlock and _transportStateBlock in the allocateRenderResourcesAndReturnError method and use them in the internalRenderBlock method:
- (AUInternalRenderBlock)internalRenderBlock {
    // Capture in locals to avoid Obj-C member lookups. If "self" is captured in render, we're doing it wrong. See sample code.
    return ^AUAudioUnitStatus(AudioUnitRenderActionFlags *actionFlags,
                              const AudioTimeStamp *timestamp,
                              AVAudioFrameCount frameCount,
                              NSInteger outputBusNumber,
                              AudioBufferList *outputData,
                              const AURenderEvent *realtimeEventListHead,
                              AURenderPullInputBlock pullInputBlock) {
        // Transport state
        if (_transportStateBlock) {
            AUHostTransportStateFlags transportStateFlags;
            _transportStateBlock(&transportStateFlags, nil, nil, nil);
            if (transportStateFlags & AUHostTransportStateMoving) {
                if (!playedOnce) {
                    // NOTE On!
                    unsigned char dataOn[] = {0x90, 69, 96};
                    _outputEventBlock(timestamp->mSampleTime, 0, 3, dataOn);
                    playedOnce = YES;
                    // NOTE Off!
                    unsigned char dataOff[] = {0x80, 69, 0};
                    _outputEventBlock(timestamp->mSampleTime + 96000, 0, 3, dataOff);
                }
            } else {
                playedOnce = NO;
            }
        }
        return noErr;
    };
}
What this code is meant to do is play the A4 note on a synthesizer at the host for 2 seconds (the sampling rate is 48 kHz, hence the 96000-sample offset for the note off). What I get is a click sound. Experimenting some, I have tried delaying the note-on MIDI event by offsetting its AUEventSampleTime in the _outputEventBlock call, but the click still plays as soon as the play button is pressed on the host.
Now, if I change the code to generate the note-off MIDI event when the transport state flags indicate the state is "not moving" instead, then the note plays as soon as the play button is pressed and stops when the pause button is pressed, which would be the correct behavior. This tells me that my understanding of the AUEventSampleTime parameter of MIDIOutputEventBlock is flawed and that it cannot be used to schedule MIDI events for the host by adding offsets to it.
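If that is so, presumably the plug-in has to hold on to the note-off itself and emit it in the render cycle where it falls due; a sketch of what I mean (noteOffPending and noteOffTime would be extra state, not something in the code above):
// Inside the render block: emit the pending note-off only once its
// sample time falls inside the current render cycle.
if (noteOffPending &&
    noteOffTime >= timestamp->mSampleTime &&
    noteOffTime <  timestamp->mSampleTime + frameCount) {
    unsigned char dataOff[] = {0x80, 69, 0};
    _outputEventBlock(noteOffTime, 0, 3, dataOff);
    noteOffPending = NO;
}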
I see that there is another property, scheduleMIDIEventBlock, and I tried using that instead, but when I do, there isn't any sound played at all.
Any clarification of how this all works would be greatly appreciated.

Remote Audio not connecting: iOS, PJSIP 2.6, CallKit, PJSUA2

I am updating an existing iOS VOIP application to use CallKit with PJSIP 2.6 and PJSUA2.
After some effort, the CallKit implementation seems to be working as expected. Incoming calls can be accepted or declined, and if accepted, will be connected and controlled with an in-app active call view controller.
The audio, however, does not appear to be properly connected at the pjsip end. There is no audio coming in from, or going out to, the remote caller, and the microphone audio appears to be routed back to the iPhone speaker.
The SIP audio ports should be connected in the callback function onCallMediaState:
virtual void onCallMediaState(OnCallMediaStateParam &prm)
{
    CallInfo ci = getInfo();
    AudioMedia *audio_media = 0;
    for (unsigned i = 0; i < ci.media.size(); i++) {
        if (ci.media[i].type == PJMEDIA_TYPE_AUDIO &&
            (ci.media[i].status == PJSUA_CALL_MEDIA_ACTIVE ||
             ci.media[i].status == PJSUA_CALL_MEDIA_REMOTE_HOLD)) {
            try {
                audio_media = static_cast<AudioMedia *>(getMedia(i));
                if (audio_media != 0) {
                    Endpoint::instance().audDevManager().getCaptureDevMedia().startTransmit(*audio_media);
                    audio_media->startTransmit(Endpoint::instance().audDevManager().getPlaybackDevMedia());
                }
            } catch (std::exception &ex) {
                continue;
            }
        }
    }
}
As described in ticket #1941 (https://trac.pjsip.org/repos/ticket/1941):
I set the audio devices using:
ep->audDevManager().setNullDev();
immediately after the initialization of the Endpoint class (ep->libInit(epConfig);). I then attempt to set the devices using pjsua_set_snd_dev() in CXProvider’s didActivate function, like this:
-(void) setSipSoundDevices {
    pj_status_t status;
    int captDev, playDev;
    pjsua_get_snd_dev(&captDev, &playDev);
    Endpoint::instance().audDevManager().setPlaybackDev(playDev);
    Endpoint::instance().audDevManager().setCaptureDev(captDev);
}
pjsua_get_snd_dev(&captDev, &playDev) returns -99, -99, and the audio does not connect.
My question is this: how can I properly hook up the remote audio sources or ports on an incoming call using PJSIP 2.6 and CallKit?
Might 2.5.5 work better in this regard?
Any insights are appreciated.
By and by, I got the incoming call audio working properly. The crux of the matter was that even though the documentation from both Apple and pjsip says the audio has to be handled on the iOS side, you still have to set the SIP sound devices in the SIP layer, in the provider delegate's didActivate and didDeactivate functions. Because I use the PJSUA C++ layer, I had to drill down through the Objective-C++ bridging layer to provide this functionality, i.e.:
-(void) activateSipSoundDevices {
    pj_status_t status = pjsua_set_snd_dev(0, 0);
}

-(void) deactivateSipSoundDevices {
    pj_status_t status = pjsua_set_null_snd_dev();
}
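For context, a sketch of where those bridge methods get called from, assuming they live on the provider delegate itself (hypothetical placement; adapt to your own bridging setup):
// CallKit owns the audio session; only touch pjsip's sound device once
// CallKit reports the session is active, and release it on deactivation.
- (void)provider:(CXProvider *)provider didActivateAudioSession:(AVAudioSession *)audioSession {
    [self activateSipSoundDevices];
}

- (void)provider:(CXProvider *)provider didDeactivateAudioSession:(AVAudioSession *)audioSession {
    [self deactivateSipSoundDevices];
}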
When initializing the SIP Account, be sure to set the null sound devices like:
ep->audDevManager().setNullDev();
Hope this helps.

play multiple SpeakHere audio files

I have attempted to record multiple separate short voice snippets in SpeakHere, each to its own file. I want to play them back serially, with a fixed interval of time between the starts of successive snippets, and I want the series of snippets to play in a loop forever, or until the user stops playback.
My question is how do I alter SpeakHere to do so?
(I say "attempted" because I have not been able yet to run SpeakHere on my Mac Mini iPhone simulator. That is the subject of another question and because another question on the subject of multiple files has not been answered, either.)
In SpeakHereController.mm is the following method definition for playing a recorded file. Notice that the final else clause calls player->StartQueue(false):
- (IBAction)play:(id)sender
{
    if (player->IsRunning())
    {
        [snip]
    }
    else
    {
        OSStatus result = player->StartQueue(false);
        if (result == noErr)
            [[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueResumed" object:self];
    }
}
Below is an excerpt from SpeakHere's AQPlayer.mm:
OSStatus AQPlayer::StartQueue(BOOL inResume)
{
    // if we have a file but no queue, create one now
    if ((mQueue == NULL) && (mFilePath != NULL))
        CreateQueueForFile(mFilePath);

    mIsDone = false;

    // if we are not resuming, we also should restart the file read index
    if (!inResume) {
        mCurrentPacket = 0;
        // prime the queue with some data before starting
        for (int i = 0; i < kNumberBuffers; ++i) {
            AQBufferCallback(this, mQueue, mBuffers[i]);
        }
    }
    return AudioQueueStart(mQueue, NULL);
}
So, can the play method and AQPlayer::StartQueue be used to play the multiple files? How can the intervals be enforced, and how can the loop be repeated?
My adaptation of the code for the record method is as follows, so you can see how the multiple files are being created:
- (IBAction)record:(id)sender
{
    if (recorder->IsRunning()) // If we are currently recording, stop and save the file.
    {
        [self stopRecord];
    }
    else // If we're not recording, start.
    {
        self.counter = self.counter + 1; // Added *****
        btn_play.enabled = NO;
        // Set the button's state to "stop"
        btn_record.title = @"Stop";
        // Start the recorder with a numbered filename
        NSString *filename = [[NSString alloc] initWithFormat:@"recordedFile%d.caf", self.counter];
        // recorder->StartRecord(CFSTR("recordedFile.caf"));
        recorder->StartRecord((CFStringRef)filename);
        [self setFileDescriptionForFormat:recorder->DataFormat() withName:@"Recorded File"];
        // Hook the level meter up to the Audio Queue for the recorder
        [lvlMeter_in setAq:recorder->Queue()];
    }
}
Having spoken with a local iOS meetup group, I have learned that the easy solution to my question is to avoid AudioQueues and instead use the "higher level" AVAudioRecorder and AVAudioPlayer from AVFoundation.
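A minimal sketch of that approach, reusing the numbered recordedFile<n>.caf names from my record method above (the timer interval, property names, and file location are placeholders, not SpeakHere code):
// A repeating timer starts the next snippet every kSnippetInterval seconds
// and wraps back to the first file, looping until stopAll: is called.
- (IBAction)playAll:(id)sender
{
    self.nextIndex = 1;
    self.playTimer = [NSTimer scheduledTimerWithTimeInterval:kSnippetInterval
                                                      target:self
                                                    selector:@selector(playNextSnippet)
                                                    userInfo:nil
                                                     repeats:YES];
    [self playNextSnippet]; // start the first snippet immediately
}

- (void)playNextSnippet
{
    NSString *name = [NSString stringWithFormat:@"recordedFile%d.caf", self.nextIndex];
    NSURL *url = [NSURL fileURLWithPath:
                     [NSTemporaryDirectory() stringByAppendingPathComponent:name]];
    self.player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:NULL];
    [self.player play];
    self.nextIndex = (self.nextIndex % self.counter) + 1; // wrap around to loop
}

- (IBAction)stopAll:(id)sender
{
    [self.playTimer invalidate];
    [self.player stop];
}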
I also found a way to partially test my app on the simulator with my Mac mini: by plugging an Olympus audio recorder into the Mini over USB as an input "voice". This works as an alternative to the iSight, which does not produce audio input on the Mini.

iPhone playback AudioQueue stops/fails to continue after pausing at a breakpoint during debugging

For some reason, it seems that stopping at a breakpoint during debugging will kill my audio queue playback.
1. AudioQueue will be playing audio output.
2. Trigger a breakpoint to pause my iPhone app.
3. On subsequent resume, audio no longer gets played.
(However, AudioQueue callback functions are still getting called.)
(No AudioSession or AudioQueue errors are found.)
Since the debugger pauses the application (rather than, say, an incoming phone call), it's not a typical iPhone interruption, so AudioSession interruption callbacks do not get triggered as in this solution.
I am using three AudioQueue buffers of 4096 samples at 22 kHz and filling them in a circular manner.
The problem occurs in both multi-threaded and single-threaded modes.
Is there some known problem where you can't pause and resume AudioSessions or AudioQueues during a debugging session?
Is it running out of queued buffers and destroying/killing the AudioQueue object? (But then my AQ callback shouldn't trigger.)
Anyone have insight into the inner workings of iPhone AudioQueues?
After playing around with it for the last several days before posting to StackOverflow, I figured out the answer just today. Go figure!
Just recreate the AudioQueue by calling my "preparation functions":
SetupNewQueue(mDataFormat.mSampleRate, mDataFormat.mChannelsPerFrame);
StartQueue(true);
So, detect when your AudioQueue may have died. In my case, I write data into an input buffer to be pulled by the AudioQueue callback; if that doesn't occur within a certain time, or after X number of bytes of input buffer have been filled, I recreate the AudioQueue.
This seems to solve the issue where audio halts/fails when you hit a debugging breakpoint.
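A sketch of that watchdog check, with mLastCallbackTime as hypothetical state that AQBufferCallback would stamp with CFAbsoluteTimeGetCurrent() on every pull:
// If the callback hasn't pulled data for several buffer durations,
// assume the queue died at the breakpoint and rebuild it.
void AQPlayer::CheckQueueAlive()
{
    if (CFAbsoluteTimeGetCurrent() - mLastCallbackTime > mBufferWaitTime * kNumberBuffers) {
        AudioQueueDispose(mQueue, true); // tear down the stalled queue
        mQueue = NULL;
        SetupNewQueue(mDataFormat.mSampleRate, mDataFormat.mChannelsPerFrame);
        StartQueue(true);
    }
}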
The simplified versions of these functions are the following:
void AQPlayer::SetupNewQueue(double inSampleRate, UInt32 inChannelsPerFrame)
{
    // Prep AudioStreamBasicDescription
    mDataFormat.mSampleRate = inSampleRate;
    mDataFormat.SetCanonical(inChannelsPerFrame, YES);

    XThrowIfError(AudioQueueNewOutput(&mDataFormat, AQPlayer::AQBufferCallback, this,
                                      NULL, kCFRunLoopCommonModes, 0, &mQueue), "AudioQueueNew failed");

    // adjust buffer size to represent about a half second of audio based on this format
    CalculateBytesForTime(mDataFormat, kBufferDurationSeconds, &mBufferByteSize, &mNumPacketsToRead);
    ctl->cmsg(CMSG_INFO, VERB_NOISY, "AQPlayer Buffer Byte Size: %d, Num Packets to Read: %d\n",
              (int)mBufferByteSize, (int)mNumPacketsToRead);
    mBufferWaitTime = mNumPacketsToRead / mDataFormat.mSampleRate * 0.9;

    XThrowIfError(AudioQueueAddPropertyListener(mQueue, kAudioQueueProperty_IsRunning, isRunningProc, this),
                  "adding property listener");

    // Allocate AQ buffers (assume we are using CBR (constant bitrate))
    for (int i = 0; i < kNumberBuffers; ++i) {
        XThrowIfError(AudioQueueAllocateBuffer(mQueue, mBufferByteSize, &mBuffers[i]),
                      "AudioQueueAllocateBuffer failed");
    }
    ...
}
OSStatus AQPlayer::StartQueue(BOOL inResume)
{
    // if we are not resuming, we also should restart the file read index
    if (!inResume)
        mCurrentPacket = 0;

    // prime the queue with some data before starting
    for (int i = 0; i < kNumberBuffers; ++i) {
        mBuffers[i]->mAudioDataByteSize = mBuffers[i]->mAudioDataBytesCapacity;
        memset(mBuffers[i]->mAudioData, 0, mBuffers[i]->mAudioDataByteSize);
        XThrowIfError(AudioQueueEnqueueBuffer(mQueue, mBuffers[i], 0, NULL),
                      "AudioQueueEnqueueBuffer failed");
    }

    OSStatus status;
    status = AudioSessionSetActive(true);
    XThrowIfError(status, "\n\n*** AudioSession failed to become active *** \n\n");

    status = AudioQueueStart(mQueue, NULL);
    XThrowIfError(status, "\n\n*** AudioQueue failed to start *** \n\n");

    return status;
}
