When I am on a SIP call, I sometimes want to send DTMF digits.
To do this I created a custom dial pad; when a key is pressed it should play that key's sound, but the sound is not played during a SIP call (when there is no call, the sound plays fine).
These sounds are played with functions from the AudioToolbox framework (AudioServicesPlaySystemSound(soundID)).
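For reference, this is roughly how the key sounds are played now (a minimal sketch; the file name is illustrative):
#import <AudioToolbox/AudioToolbox.h>
// Create and play a short system sound for a keypad press
SystemSoundID keySoundID;
NSURL *soundURL = [[NSBundle mainBundle] URLForResource:@"key_1" withExtension:@"wav"];
AudioServicesCreateSystemSoundID((__bridge CFURLRef)soundURL, &keySoundID);
AudioServicesPlaySystemSound(keySoundID);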
Is there some property that I need to set in pjsip (pjsua) or in AudioToolbox to enable a sound to be played during a SIP call?
I know this is possible (Bria has this, Groundwire too, though I'm not sure whether they use pjsip for their SIP implementation).
This answer is a combination of code snippets from these two links: PJSUA-API Media Manipulation and pjsipDll_PlayWav.cpp.
When pjsua makes a call, it uses conference ports to transfer media between the call destination and your device's speaker. You can have multiple ports open at the same time.
So, to play our keypad button click sound, we are going to open one more port and play a sound through it (in this case a WAV file; as you may notice, there is also a pjsua function for streaming AVI files).
To do this we are going to use this function:
pj_status_t pjsua_conf_connect(pjsua_conf_port_id source, pjsua_conf_port_id sink)
where the sink is our device's speaker port, which in this case (and usually) is port 0.
All the functions below are added to the pjsua_app.c file.
Before the place where they are used in your Objective-C class, you have to add a declaration like this:
pj_status_t play_sound_during_call(pj_str_t sound_file);
To play a sound, here is the function:
pj_status_t play_sound_during_call(pj_str_t sound_file)
{
    pjsua_player_id player_id;
    pj_status_t status;

    /* Create a WAV file player for the sound file */
    status = pjsua_player_create(&sound_file, 0, &player_id);
    if (status != PJ_SUCCESS)
        return status;

    /* Get the player's media port so we can attach an EOF callback */
    pjmedia_port *player_media_port;
    status = pjsua_player_get_port(player_id, &player_media_port);
    if (status != PJ_SUCCESS)
        return status;

    /* Allocate the EOF callback data from its own pool; the player
       (and the pool) are cleaned up in the EOF callback below */
    pj_pool_t *pool = pjsua_pool_create("my_eof_data", 512, 512);
    struct pjsua_player_eof_data *eof_data =
        PJ_POOL_ZALLOC_T(pool, struct pjsua_player_eof_data);
    eof_data->pool = pool;
    eof_data->player_id = player_id;
    pjmedia_wav_player_set_eof_cb(player_media_port, eof_data,
                                  &on_pjsua_wav_file_end_callback);

    /* Connect the player's conference port to the speaker (port 0) */
    status = pjsua_conf_connect(pjsua_player_get_conf_port(player_id), 0);
    return status;
}
And here is the callback that is invoked when the WAV file has finished playing:
struct pjsua_player_eof_data
{
    pj_pool_t *pool;
    pjsua_player_id player_id;
};
static PJ_DEF(pj_status_t) on_pjsua_wav_file_end_callback(pjmedia_port *media_port, void *args)
{
    struct pjsua_player_eof_data *eof_data = (struct pjsua_player_eof_data *)args;
    pj_pool_t *pool = eof_data->pool;
    pj_status_t status;

    PJ_LOG(3, (THIS_FILE, "End of WAV file, media_port: %p", media_port));

    status = pjsua_player_destroy(eof_data->player_id);
    pj_pool_release(pool); /* release the pool that held eof_data */

    if (status == PJ_SUCCESS)
    {
        /* Important: return a value other than PJ_SUCCESS here, because
           we destroyed the player (and its port) in this callback.
           See the documentation link below. */
        return -1;
    }
    return PJ_SUCCESS;
}
The reason the pjmedia_wav_player_set_eof_cb callback must return a value other than PJ_SUCCESS is that the documentation for pjmedia_wav_player_set_eof_cb says:
Note that if the application destroys the file port in the callback, it must return non-PJ_SUCCESS here.
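For completeness, here is a sketch of how the function might be called from an Objective-C keypad handler (the handler and file name are hypothetical):
// Hypothetical keypad handler; play_sound_during_call is the function above
- (void)keypadButtonPressed:(UIButton *)sender
{
    NSString *path = [[NSBundle mainBundle] pathForResource:@"key_click"
                                                     ofType:@"wav"];
    pj_str_t wav = pj_str((char *)[path UTF8String]);
    pj_status_t status = play_sound_during_call(wav);
    if (status != PJ_SUCCESS)
        NSLog(@"Failed to play key sound, status: %d", status);
}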
I implemented audio calling using pjsip and it works properly, but video calls do not work.
I applied the following changes:
//Sip init
pj_status_t sip_startup(app_config_t *app_config)
{
    pj_status_t status;

    pjsua_config cfg;
    pjsua_config_default(&cfg);
    cfg.cb.on_incoming_call = &on_incoming_call;
    cfg.cb.on_call_media_state = &on_call_media_state;
    cfg.cb.on_call_state = &on_call_state;
    cfg.cb.on_reg_state2 = &on_reg_state2;
    cfg.cb.on_call_media_event = &on_call_media_event;

    // Init the logging config structure
    pjsua_logging_config log_cfg;
    pjsua_logging_config_default(&log_cfg);
    log_cfg.console_level = 4;

    // Init the media config
    pjsua_media_config me_cfg;
    pjsua_media_config_default(&me_cfg);

    // Init pjsua
    status = pjsua_init(&cfg, &log_cfg, &me_cfg);
    if (status != PJ_SUCCESS) error_exit("Error in pjsua_init()", status);
    return status;
}
//The following code is added when setting up the call
pjsua_call_setting _call_setting;
pjsua_call_setting_default(&_call_setting);
_call_setting.aud_cnt = 1;
_call_setting.vid_cnt = 1;
//When the call button is pressed in the app, this function is called to place a video call
pj_status_t sip_dial(pjsua_acc_id acc_id, const char *number,
                     pjsua_call_id *call_id)
{
    pj_status_t status;
    pj_str_t uri = pj_str((char *)number);
    status = pjsua_call_make_call(acc_id, &uri, &_call_setting,
                                  NULL, NULL, call_id);
    if (status != PJ_SUCCESS)
        error_exit("Error making call", status);
    return status;
}
//Changes related to the video code
static void on_call_media_state(pjsua_call_id call_id)
{
    pjsua_call_info ci;
    unsigned mi;
    pjsua_call_get_info(call_id, &ci);
    sip_ring_stop([SharedAppDelegate.aVoipManager pjsipConfig]);
    // Walk the call's media lines and show the incoming video window
    for (mi = 0; mi < ci.media_cnt; ++mi)
    {
        if (ci.media[mi].type == PJMEDIA_TYPE_VIDEO &&
            ci.media[mi].status == PJSUA_CALL_MEDIA_ACTIVE)
        {
            NSLog(@"window id : %d", ci.media[mi].stream.vid.win_in);
            NSLog(@"media id : %d", mi);
            [[XCPjsua sharedXCPjsua]
                displayWindow:ci.media[mi].stream.vid.win_in];
        }
    }
}
I applied the above code but still cannot place a video call using pjsip.
If anyone has an idea or knows the steps needed for a video call, please help me.
Thank you
This subject is too large; I think you need to narrow this down to a smaller, more specific question if you wish to get a good answer.
Make sure you have read and understood the pjsip video support documentation:
PJSip Video_Users_Guide
PJSIP IOS Video Support
I would look at what other people have done (even if it's on another platform, e.g. Android, Windows, etc.) and then look into the pjsip pjsua sample, which I believe has video support, though I'm not sure whether it supports iOS video.
Get a known-good example of pjsip video calls going, so you know what it looks like and what the logs look like when it works.
Then test your iOS code against the known-good example clients to see where they differ. If you can't figure it out, you should at least have enough information to ask a more specific question about a specific situation that is not working for you.
I am working on audio and video call features in my application. I succeeded in making audio calls, but I am stuck on video calling. For video calling I am using the following code:
pjsua_call_setting opt;
pjsua_call_setting_default(&opt);
opt.aud_cnt = 1;
opt.vid_cnt = 1;
char *destUri = "sip:XXXXXX@sipserver";
pj_status_t status;
pj_str_t uri = pj_str(destUri);
status = pjsua_call_make_call(voipManager._sip_acc_id, &uri, &opt,
                              NULL, NULL, NULL);
if (status != PJ_SUCCESS)
    NSLog(@"%d", status);
else
    NSLog(@"%d", status);
When the pjsua_call_make_call function is executed, it fails with this error:
Assertion failed: (opt->vid_cnt == 0), function apply_call_setting, file ../src/pjsua-lib/pjsua_call.c, line 606.
You must build the library with video support.
To enable video, append this to config_site.h:
#define PJMEDIA_HAS_VIDEO 1
The assertion error you are getting is the library's check for video support.
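As a defensive measure you can also guard the call setting at runtime; a minimal sketch (PJMEDIA_HAS_VIDEO is the real pjsip macro, the surrounding code is only illustrative):
pjsua_call_setting opt;
pjsua_call_setting_default(&opt);
opt.aud_cnt = 1;
#if defined(PJMEDIA_HAS_VIDEO) && (PJMEDIA_HAS_VIDEO != 0)
opt.vid_cnt = 1;   /* the library was built with video support */
#else
opt.vid_cnt = 0;   /* avoid the assertion in apply_call_setting */
#endif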
I'm using MusicPlayer to play the notes in a MusicSequence:
NewMusicSequence(&sequence);
MusicSequenceFileLoad(sequence, (__bridge CFURLRef)midiFileURL, 0, 0);
// Set the endpoint of the sequence to be our virtual endpoint
MusicSequenceSetMIDIEndpoint(sequence, virtualEndpoint);
// Create a new music player
MusicPlayer p;
// Initialise the music player
NewMusicPlayer(&p);
self.player = p;
// Load the sequence into the music player
MusicPlayerSetSequence(self.player, sequence);
// Called to do some MusicPlayer setup. This just
// reduces latency when MusicPlayerStart is called
MusicPlayerPreroll(self.player);

-(void)play {
    MusicPlayerStart(self.player);
}
It works well, very well I would say, but I do not want to use the internal clock.
How can I use an external MIDI clock?
Or perhaps I can somehow move the playback cursor with a clock?
You can use MusicSequenceSetMIDIEndpoint(sequence, endpointRef);
then create a MIDI clock:
CAClockRef mtcClockRef;
OSStatus err;
err = CAClockNew(0, &mtcClockRef);
if (err != noErr) {
    NSLog(@"\t\terror %ld at CAClockNew()", (long)err);
} else {
    CAClockTimebase timebase = kCAClockTimebase_HostTime;
    UInt32 size = 0;
    size = sizeof(timebase);
    err = CAClockSetProperty(mtcClockRef, kCAClockProperty_InternalTimebase, size, &timebase);
    if (err)
        NSLog(@"Error setting clock timebase");
}
Then set the sync mode:
UInt32 tSyncMode = kCAClockSyncMode_MIDIClockTransport;
size = sizeof(tSyncMode);
err = CAClockSetProperty(mtcClockRef, kCAClockProperty_SyncMode, size, &tSyncMode);
Then set the clock to use the MIDI endpoint as its sync source:
err = CAClockSetProperty(mtcClockRef, kCAClockProperty_SyncSource, sizeof(endpointRef), &endpointRef);
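Once configured, the clock still has to be started; a brief sketch using the same CAClock API:
err = CAClockStart(mtcClockRef);
if (err != noErr)
    NSLog(@"error %ld at CAClockStart()", (long)err);
// ...and call CAClockStop(mtcClockRef) / CAClockDispose(mtcClockRef) when finished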
There's some reference code (VVMIDINode) here: https://github.com/mrRay/vvopensource/blob/master/VVMIDI/FrameworkSrc/VVMIDINode.h
I am writing a HOST app that uses Core Audio's new iOS 7 Inter-App Audio technology to pull audio from a single NODE "generator" app and route it into my host app. I am using the Audio Component Services and Audio Unit Component Services C frameworks to achieve this.
What I want is to establish a connection to an external node app that can generate sound, have that sound routed into my host app, and have my host app directly access the audio packet data as a stream of raw audio data.
I have written code inside my HOST app that does the following, in sequence:
Sets up and activates an audio session with the correct session category.
Refreshes a list of inter-app-audio-compatible apps of type kAudioUnitType_RemoteGenerator or kAudioUnitType_RemoteInstrument (I'm not interested in effect apps).
Pulls the last object out of that list and attempts to establish a connection using AudioComponentInstanceNew().
Sets the AudioStreamBasicDescription for the format my host app needs the audio in.
Sets up audio unit properties and callbacks, as well as an audio unit render callback on the output scope (bus).
Initializes the audio unit.
So far so good: I have been able to successfully establish a connection, but my problem is that my render callback is not being called at all. What I am having trouble understanding is how exactly to pull the audio from the node application. I have read that I need to call AudioUnitRender() in order to initiate a render cycle on the node app, but how exactly does this need to be set up in my situation? I have seen other examples where AudioUnitRender() is called from inside the render callback, but that isn't going to work for me because my render callback isn't currently being called. Do I need to set up my own audio processing thread and periodically call AudioUnitRender() on my 'node'?
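From what I have read, a direct pull would look roughly like this (only a sketch of my understanding; sampleTime and myBuffer are hypothetical, and the buffer setup is simplified for 16-bit stereo):
AudioUnitRenderActionFlags flags = 0;
AudioTimeStamp ts = {0};
ts.mFlags = kAudioTimeStampSampleTimeValid;
ts.mSampleTime = sampleTime;              // advanced by inNumberFrames per call
UInt32 inNumberFrames = 512;

AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mNumberChannels = 2;
bufferList.mBuffers[0].mDataByteSize = inNumberFrames * 2 * sizeof(SInt16);
bufferList.mBuffers[0].mData = myBuffer;  // hypothetical pre-allocated buffer

OSStatus err = AudioUnitRender(myAudioUnit, &flags, &ts, 0, inNumberFrames, &bufferList);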
The following is the code described above inside my HOST app.
static OSStatus MyAURenderCallback (void                       *inRefCon,
                                    AudioUnitRenderActionFlags *ioActionFlags,
                                    const AudioTimeStamp       *inTimeStamp,
                                    UInt32                     inBusNumber,
                                    UInt32                     inNumberFrames,
                                    AudioBufferList            *ioData)
{
    //Do something here with the audio data?
    //This method is never being called?
    //Do I need to put AudioUnitRender() in here?
    return noErr;
}
- (void)start
{
[self configureAudioSession];
[self refreshAUList];
}
- (void)configureAudioSession
{
NSError *audioSessionError = nil;
AVAudioSession *mySession = [AVAudioSession sharedInstance];
[mySession setPreferredSampleRate: _graphSampleRate error: &audioSessionError];
[mySession setCategory: AVAudioSessionCategoryPlayAndRecord error: &audioSessionError];
[mySession setActive: YES error: &audioSessionError];
self.graphSampleRate = [mySession sampleRate];
}
- (void)refreshAUList
{
_audioUnits = @[].mutableCopy;
AudioComponentDescription searchDesc = { 0, 0, 0, 0, 0 }, foundDesc;
AudioComponent comp = NULL;
while (true) {
comp = AudioComponentFindNext(comp, &searchDesc);
if (comp == NULL) break;
if (AudioComponentGetDescription(comp, &foundDesc) != noErr) continue;
if (foundDesc.componentType == kAudioUnitType_RemoteGenerator || foundDesc.componentType == kAudioUnitType_RemoteInstrument) {
RemoteAU *rau = [[RemoteAU alloc] init];
rau->_desc = foundDesc;
rau->_comp = comp;
AudioComponentCopyName(comp, &rau->_name);
rau->_image = AudioComponentGetIcon(comp, 48);
rau->_lastActiveTime = AudioComponentGetLastActiveTime(comp);
[_audioUnits addObject:rau];
}
}
[self connect];
}
- (void)connect {
if ([_audioUnits count] <= 0) {
return;
}
RemoteAU *rau = [_audioUnits lastObject];
AudioUnit myAudioUnit;
//Node application will get launched in background
Check(AudioComponentInstanceNew(rau->_comp, &myAudioUnit));
AudioStreamBasicDescription format = {0};
format.mChannelsPerFrame = 2;
format.mSampleRate = [[AVAudioSession sharedInstance] sampleRate];
format.mFormatID = kAudioFormatMPEG4AAC;
UInt32 propSize = sizeof(format);
Check(AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &propSize, &format));
//Output format from node to host
Check(AudioUnitSetProperty(myAudioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &format, sizeof(format)));
//Setup a render callback to the output scope of the audio unit representing the node app
AURenderCallbackStruct callbackStruct = {0};
callbackStruct.inputProc = MyAURenderCallback;
callbackStruct.inputProcRefCon = (__bridge void *)(self);
Check(AudioUnitSetProperty(myAudioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Output, 0, &callbackStruct, sizeof(callbackStruct)));
//Set up property-change callbacks
Check(AudioUnitAddPropertyListener(myAudioUnit, kAudioUnitProperty_IsInterAppConnected, IsInterappConnected, NULL));
Check(AudioUnitAddPropertyListener(myAudioUnit, kAudioOutputUnitProperty_HostTransportState, AudioUnitPropertyChangeDispatcher, NULL));
//Initialize the audio unit representing the node application
Check(AudioUnitInitialize(myAudioUnit));
}
So, basically I want to play some audio files (mostly mp3 and caf). But the callback never gets called, except when I call it myself to prime the queue.
Here's my data struct:
struct AQPlayerState
{
CAStreamBasicDescription mDataFormat;
AudioQueueRef mQueue;
AudioQueueBufferRef mBuffers[kBufferNum];
AudioFileID mAudioFile;
UInt32 bufferByteSize;
SInt64 mCurrentPacket;
UInt32 mNumPacketsToRead;
AudioStreamPacketDescription *mPacketDescs;
bool mIsRunning;
};
Here's my callback function:
static void HandleOutputBuffer (void *aqData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
{
NSLog(#"HandleOutput");
AQPlayerState *pAqData = (AQPlayerState *) aqData;
if (pAqData->mIsRunning == false) return;
UInt32 numBytesReadFromFile;
UInt32 numPackets = pAqData->mNumPacketsToRead;
AudioFileReadPackets (pAqData->mAudioFile,
false,
&numBytesReadFromFile,
pAqData->mPacketDescs,
pAqData->mCurrentPacket,
&numPackets,
inBuffer->mAudioData);
if (numPackets > 0) {
inBuffer->mAudioDataByteSize = numBytesReadFromFile;
AudioQueueEnqueueBuffer (pAqData->mQueue,
inBuffer,
(pAqData->mPacketDescs ? numPackets : 0),
pAqData->mPacketDescs);
pAqData->mCurrentPacket += numPackets;
} else {
// AudioQueueStop(pAqData->mQueue, false);
// AudioQueueDispose(pAqData->mQueue, true);
// AudioFileClose (pAqData->mAudioFile);
// free(pAqData->mPacketDescs);
// free(pAqData->mFloatBuffer);
pAqData->mIsRunning = false;
}
}
And here's my method:
- (void)playFile
{
AQPlayerState aqData;
// get the source file
NSString *p = [[NSBundle mainBundle] pathForResource:@"1_Female" ofType:@"mp3"];
NSURL *url2 = [NSURL fileURLWithPath:p];
CFURLRef srcFile = (__bridge CFURLRef)url2;
OSStatus result = AudioFileOpenURL(srcFile, 0x1/*fsRdPerm*/, 0/*inFileTypeHint*/, &aqData.mAudioFile);
CFRelease (srcFile);
CheckError(result, "Error opening sound file");
UInt32 size = sizeof(aqData.mDataFormat);
CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyDataFormat, &size, &aqData.mDataFormat),
"Error getting file's data format");
CheckError(AudioQueueNewOutput(&aqData.mDataFormat, HandleOutputBuffer, &aqData, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &aqData.mQueue),
"Error AudioQueueNewOutPut");
// we need to calculate how many packets we read at a time and how big a buffer we need
// we base this on the size of the packets in the file and an approximate duration for each buffer
{
bool isFormatVBR = (aqData.mDataFormat.mBytesPerPacket == 0 || aqData.mDataFormat.mFramesPerPacket == 0);
// first check to see what the max size of a packet is - if it is bigger
// than our allocation default size, that needs to become larger
UInt32 maxPacketSize;
size = sizeof(maxPacketSize);
CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyPacketSizeUpperBound, &size, &maxPacketSize),
"Error getting max packet size");
// adjust buffer size to represent about a second of audio based on this format
CalculateBytesForTime(aqData.mDataFormat, maxPacketSize, 1.0/*seconds*/, &aqData.bufferByteSize, &aqData.mNumPacketsToRead);
if (isFormatVBR) {
aqData.mPacketDescs = new AudioStreamPacketDescription [aqData.mNumPacketsToRead];
} else {
aqData.mPacketDescs = NULL; // we don't provide packet descriptions for constant bit rate formats (like linear PCM)
}
printf ("Buffer Byte Size: %d, Num Packets to Read: %d\n", (int)aqData.bufferByteSize, (int)aqData.mNumPacketsToRead);
}
// if the file has a magic cookie, we should get it and set it on the AQ
size = sizeof(UInt32);
result = AudioFileGetPropertyInfo(aqData.mAudioFile, kAudioFilePropertyMagicCookieData, &size, NULL);
if (!result && size) {
char* cookie = new char [size];
CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyMagicCookieData, &size, cookie),
"Error getting cookie from file");
CheckError(AudioQueueSetProperty(aqData.mQueue, kAudioQueueProperty_MagicCookie, cookie, size),
"Error setting cookie to file");
delete[] cookie;
}
aqData.mCurrentPacket = 0;
for (int i = 0; i < kBufferNum; ++i) {
CheckError(AudioQueueAllocateBuffer (aqData.mQueue,
aqData.bufferByteSize,
&aqData.mBuffers[i]),
"Error AudioQueueAllocateBuffer");
HandleOutputBuffer (&aqData,
aqData.mQueue,
aqData.mBuffers[i]);
}
// set queue's gain
Float32 gain = 1.0;
CheckError(AudioQueueSetParameter (aqData.mQueue,
kAudioQueueParam_Volume,
gain),
"Error AudioQueueSetParameter");
aqData.mIsRunning = true;
CheckError(AudioQueueStart(aqData.mQueue,
NULL),
"Error AudioQueueStart");
}
And the output when I press play:
Buffer Byte Size: 40310, Num Packets to Read: 38
HandleOutput start
HandleOutput start
HandleOutput start
I tried replacing CFRunLoopGetCurrent() with CFRunLoopGetMain(), and kCFRunLoopCommonModes with kCFRunLoopDefaultMode, but nothing changed.
Shouldn't the primed buffers start playing right away when I start the queue?
When I start the queue, no callbacks are fired at all.
What am I doing wrong? Thanks for any ideas.
What you are basically trying to do here is a simple example of audio playback using Audio Queues. Rather than digging through your code in detail to see what's missing (that could take a while), I'd recommend following the steps in this basic sample code, which does exactly what you're doing, without the extras that aren't really relevant (for example, why are you trying to add audio gain?).
Somewhere else you were trying to play audio using audio units. Audio units are more complex than basic audio queue playback, and I wouldn't attempt them before being very comfortable with audio queues. But you can look at this example project for a basic example of audio queues.
In general, when it comes to Core Audio programming on iOS, it's best to take your time with the basic examples and build your way up. The problem with a lot of online tutorials is that they add extra stuff and often mix it with Objective-C code, when Core Audio is purely C code (i.e. the extra stuff won't add anything to the learning process). I strongly recommend going over the book Learning Core Audio if you haven't already. All the sample code is available online, but you can also clone it from this repo for convenience. That's how I learned Core Audio. It takes time :)