No sound when using PJSIP - iOS

I've got a problem with pjsip. I'm trying to make an outgoing call with pjsua_call_make_call. It works, but when I answer the call on another device, I can't hear any sound. However, I can see the icon on the iPhone indicating that the microphone is in use. Has anybody come across such an issue?

I am having a similar issue. I place an outbound call and can hear the audio on the device that picks up the call, but can't hear any audio on the device using pjsip to make the call.
If your audio capture doesn't seem to be working, make sure you have microphone permission, and be aware that you have to call pjsua_set_snd_dev() manually when the call connects. There is some additional troubleshooting advice here: https://trac.pjsip.org/repos/wiki/Getting-Started/iPhone#Commonproblems
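For instance, a minimal Swift sketch of those two checks (assuming the pjsua C API is exposed through a bridging header; the function name and call site are illustrative):

import AVFoundation

// Checks microphone permission, then attaches the default sound devices.
// Note: pjsua calls must come from a thread registered with pjlib.
func ensureAudioIsWired() {
    AVAudioSession.sharedInstance().requestRecordPermission { granted in
        guard granted else {
            print("Microphone permission denied - audio capture cannot work")
            return
        }
        // 0, 0 selects the default capture and playback devices.
        let status = pjsua_set_snd_dev(0, 0)
        assert(status == 0) // PJ_SUCCESS
    }
}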

I've no experience with iOS, but I suppose you should connect the audio stream to the sound device in the on_call_media_state callback. Look at this minimal example from a desktop app:
pjsua_call_info ci;
pjsua_call_get_info(call_id, &ci);
for (unsigned i = 0; i < ci.media_cnt; i++) {
    if (ci.media[i].type == PJMEDIA_TYPE_AUDIO &&
        ci.media[i].status == PJSUA_CALL_MEDIA_ACTIVE) {
        // Connect this media's conference slot to the sound device
        // (slot 0) in both directions; without this, no audio flows.
        pjsua_conf_connect(ci.media[i].stream.aud.conf_slot, 0);
        pjsua_conf_connect(0, ci.media[i].stream.aud.conf_slot);
    }
}
Edit:
iOS (Swift) code for the default audio stream in a call:
var callinfo = pjsua_call_info()
pjsua_call_get_info(call_id, &callinfo)
if callinfo.media_status == PJSUA_CALL_MEDIA_ACTIVE {
    // Connect the call's conference slot to the sound device (slot 0) both ways.
    pjsua_conf_connect(callinfo.conf_slot, 0)
    pjsua_conf_connect(0, callinfo.conf_slot)
}

How to make audio/video calls and get the call type in on_incoming_call function in PJSIP in iOS?

I built the PJSIP library with PJSUA_HAS_VIDEO set to 1. I want to add an option to make audio-only calls. I tried:
pjsua_call_setting opt;
pjsua_call_setting_default(&opt);
opt.flag = PJSUA_CALL_INCLUDE_DISABLED_MEDIA;
opt.vid_cnt = 0; // offer no video streams
opt.aud_cnt = 1; // offer one audio stream
pj_status_t status = pjsua_call_make_call((pjsua_acc_id)[self identifier], &uri,
                                          &opt, NULL, NULL, &callIdentifier);
At the receiving end, in the on_incoming_call() function, I tried:
if (callInfo.rem_offerer && callInfo.rem_vid_cnt == 1) {
    call.hasVideo = YES;
} else {
    call.hasVideo = NO;
}
But rem_vid_cnt always gives 1.
How can I set the call type while making the call and read it correctly at the receiving end? I also want to set CallKit's setHasVideo field at the receiving end.
Thanks in advance.
At the app end your code is correct, but video also has to be disabled on the server side: the SDP negotiation is a two-way exchange. On the caller side, set vid_cnt = 0 in the call settings when you initiate the call (as you already do); the receiving side should then see rem_vid_cnt as 0. If rem_vid_cnt is still 1, something in between (e.g. the server) is re-adding the video stream to the offer.
Hope this will help you :)
/** Number of video streams offered by remote */
unsigned rem_vid_cnt;
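For example, at the receiving end CallKit's hasVideo flag could be driven from those fields. A minimal Swift sketch (the reportIncomingCall helper is hypothetical, and the pjsua C API is assumed to be visible through a bridging header):

import CallKit

func reportIncomingCall(callId: pjsua_call_id, uuid: UUID, provider: CXProvider) {
    var ci = pjsua_call_info()
    pjsua_call_get_info(callId, &ci)

    let update = CXCallUpdate()
    // rem_vid_cnt counts the video streams in the remote offer and is only
    // meaningful when rem_offerer is non-zero.
    update.hasVideo = (ci.rem_offerer != 0) && (ci.rem_vid_cnt > 0)
    provider.reportNewIncomingCall(with: uuid, update: update) { _ in }
}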

Remote Audio not connecting: iOS, PJSIP 2.6, CallKit, PJSUA2

I am updating an existing iOS VoIP application to use CallKit with PJSIP 2.6 and PJSUA2.
After some effort, the CallKit implementation seems to be working as expected. Incoming calls can be accepted or declined, and if accepted, are connected and controlled with an in-app active-call view controller.
The audio, however, does not appear to be properly connected at the pjsip end. There is no audio coming in from, or going out to, the remote caller, and the microphone audio appears to be routed back to the iPhone speaker.
The SIP audio ports should be connected in the callback function onCallMediaState:
virtual void onCallMediaState(OnCallMediaStateParam &prm)
{
    CallInfo ci = getInfo();
    AudioMedia *audio_media = 0;
    for (unsigned i = 0; i < ci.media.size(); i++) {
        if (ci.media[i].type == PJMEDIA_TYPE_AUDIO &&
            (ci.media[i].status == PJSUA_CALL_MEDIA_ACTIVE ||
             ci.media[i].status == PJSUA_CALL_MEDIA_REMOTE_HOLD)) {
            try {
                audio_media = static_cast<AudioMedia *>(getMedia(i));
                if (audio_media != 0) {
                    // Microphone -> call, and call -> speaker.
                    Endpoint::instance().audDevManager().getCaptureDevMedia().startTransmit(*audio_media);
                    audio_media->startTransmit(Endpoint::instance().audDevManager().getPlaybackDevMedia());
                }
            } catch (std::exception &ex) {
                continue;
            }
        }
    }
}
As described in Ticket #1941 (https://trac.pjsip.org/repos/ticket/1941),
I set the audio devices using:
I set the audio devices using:
ep->audDevManager().setNullDev();
immediately after initializing the Endpoint (ep->libInit(epConfig);). I then attempt to set the devices using pjsua_set_snd_dev() in CXProvider's didActivate function, like this:
- (void)setSipSoundDevices {
    pj_status_t status;
    int captDev, playDev;
    pjsua_get_snd_dev(&captDev, &playDev);
    Endpoint::instance().audDevManager().setPlaybackDev(playDev);
    Endpoint::instance().audDevManager().setCaptureDev(captDev);
}
pjsua_get_snd_dev(&captDev, &playDev) returns -99, -99, and the audio does not connect.
My question is this: how can I properly hook up the remote audio sources or ports on an incoming call using PJSIP 2.6 and CallKit?
Might 2.5.5 work better in this regard?
Any insights are appreciated.
By and by, I got the incoming-call audio working properly. The crux of the matter was that even though the documentation from both Apple and PJSIP says the audio has to be handled on the iOS end, you still have to set the SIP audio devices in the SIP layer, in the provider delegate's didActivate and didDeactivate functions. Because I use the PJSUA C++ layer, I had to drill down through the Objective-C++ bridging layer to provide this functionality, i.e.:
- (void)activateSipSoundDevices {
    // 0, 0 selects the default capture and playback devices.
    pj_status_t status = pjsua_set_snd_dev(0, 0);
}

- (void)deactivateSipSoundDevices {
    // Detach the hardware again; the null device keeps the conference
    // bridge running without an active audio session.
    pj_status_t status = pjsua_set_null_snd_dev();
}
When initializing the SIP account, be sure to set the null sound device, like:
ep->audDevManager().setNullDev();
Hope this helps.
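For reference, a minimal Swift sketch of the delegate side (the class name is hypothetical, and the pjsua C API is assumed to be bridged):

import AVFoundation
import CallKit

final class ProviderDelegate: NSObject, CXProviderDelegate {
    func providerDidReset(_ provider: CXProvider) {
        // Clean up any ongoing calls here if the system resets the provider.
    }

    func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
        // CallKit has activated the audio session: attach the default
        // capture/playback devices so PJSIP starts moving audio.
        pjsua_set_snd_dev(0, 0)
    }

    func provider(_ provider: CXProvider, didDeactivate audioSession: AVAudioSession) {
        // Session deactivated: fall back to the null sound device so the
        // conference bridge keeps running without touching the hardware.
        pjsua_set_null_snd_dev()
    }
}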

AVAudioSession: Some Bluetooth devices are not working properly on my App

I'm developing a Swift audio/video and text-chat iOS app using AVAudioSession.
Whenever I select certain Bluetooth devices, the sound played on the device is not the app's audio stream; instead they play only the system sound sent by the text-chat library whenever messages are sent or received. It doesn't happen with all Bluetooth devices; on some of them everything works fine. With the built-in mic and speaker the app works fine too.
Here are the most important methods from my class that manages the device:
class MyAudioSession
{
    private var mAudioSession: AVAudioSession;

    init!()
    {
        self.mAudioSession = AVAudioSession.sharedInstance();
        do {
            try self.mAudioSession.setActive(false);
            try self.mAudioSession.setCategory(AVAudioSessionCategoryPlayAndRecord, withOptions: .AllowBluetooth);
            try self.mAudioSession.setMode(AVAudioSessionModeVideoChat);
            try self.mAudioSession.setActive(true);
        }
        catch {
            return nil;
        }
    }

    func switchToDevice(device: AVAudioSessionPortDescription!) -> Bool
    {
        var ret = false;
        if (device != nil) {
            do {
                try self.mAudioSession.setPreferredInput(device);
                ret = true;
            }
            catch {
                self.logSwitch(device, error: error);
            }
        }
        return ret;
    }
}
I'd like to understand why my app is not working on just SOME Bluetooth devices. These same devices work properly with the other apps on my phone.
I did another test: I replaced all of this with MPVolumeView, and exactly the same issue occurred, so the problem seems to be in the audio player.
Could anybody give me a suggestion to fix this?
Thx.
Jorg,
While this might not be the best answer, I have been able to overcome the weird Bluetooth issues. My problem seems to be similar to yours, as I too was using:
AVAudioSessionCategoryPlayAndRecord
This was causing issues for me on some Bluetooth devices (not all, but some).
What I wound up doing was setting the category to:
AVAudioSessionCategoryPlayback
Then, whenever I needed to record, I would switch the category over to:
AVAudioSessionCategoryRecord
and back to Playback after completing my recording.
This was the only way, at this time, that I could get a consistent result when switching between the different outputs (speaker, headphones, Bluetooth).
Hope that helps some. I'm guessing this is a bug in AVAudioSessionCategoryPlayAndRecord.
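In current Swift syntax (where those constants are spelled .playAndRecord, .playback, and .record), the workaround might look like this minimal sketch:

import AVFoundation

// Stay in .playback normally; switch to .record only while recording.
final class RecordingSessionManager {
    private let session = AVAudioSession.sharedInstance()

    func enterPlaybackMode() throws {
        try session.setCategory(.playback, mode: .default)
        try session.setActive(true)
    }

    func beginRecording() throws {
        // .allowBluetooth is a valid option with the .record category.
        try session.setCategory(.record, mode: .default, options: [.allowBluetooth])
        try session.setActive(true)
    }

    func endRecording() throws {
        // Back to playback once the recording has finished.
        try enterPlaybackMode()
    }
}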

play multiple SpeakHere audio files

I have attempted to record multiple separate short voice snippets in SpeakHere, using a different filename for each. I want to play them serially, separated by a fixed interval of time between the starts of consecutive snippets, and I want the series of snippets to play in a loop forever, or until the user stops playback.
My question is how to alter SpeakHere to do so.
(I say "attempted" because I have not yet been able to run SpeakHere in the iPhone simulator on my Mac mini. That is the subject of another question, and another question of mine on the subject of multiple files has not been answered either.)
In SpeakHereController.mm is the following method definition for playing a recorded file. Notice that the final else clause calls player->StartQueue(false):
- (IBAction)play:(id)sender
{
    if (player->IsRunning())
    {
        [snip]
    }
    else
    {
        OSStatus result = player->StartQueue(false);
        if (result == noErr)
            [[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueResumed" object:self];
    }
}
Below is an excerpt from SpeakHere's AQPlayer.mm:
OSStatus AQPlayer::StartQueue(BOOL inResume)
{
    // if we have a file but no queue, create one now
    if ((mQueue == NULL) && (mFilePath != NULL))
        CreateQueueForFile(mFilePath);

    mIsDone = false;

    // if we are not resuming, we also should restart the file read index
    if (!inResume) {
        mCurrentPacket = 0;

        // prime the queue with some data before starting
        for (int i = 0; i < kNumberBuffers; ++i) {
            AQBufferCallback(this, mQueue, mBuffers[i]);
        }
    }
    return AudioQueueStart(mQueue, NULL);
}
So, can the play method and AQPlayer::StartQueue be used to play the multiple files? How can the intervals be enforced, and how can the loop be repeated?
My adaptation of the code for the record method is as follows, so you can see how the multiple files are being created:
- (IBAction)record:(id)sender
{
    if (recorder->IsRunning()) // If we are currently recording, stop and save the file.
    {
        [self stopRecord];
    }
    else // If we're not recording, start.
    {
        self.counter = self.counter + 1; // Added *****
        btn_play.enabled = NO;

        // Set the button's state to "stop"
        btn_record.title = @"Stop";

        // Start the recorder
        NSString *filename = [[NSString alloc] initWithFormat:@"recordedFile%d.caf", self.counter];
        // recorder->StartRecord(CFSTR("recordedFile.caf"));
        recorder->StartRecord((CFStringRef)filename);

        [self setFileDescriptionForFormat:recorder->DataFormat() withName:@"Recorded File"];

        // Hook the level meter up to the Audio Queue for the recorder
        [lvlMeter_in setAq:recorder->Queue()];
    }
}
Having spoken with a local iOS meetup group, I have learned that the easy solution to my question is to avoid Audio Queues and instead use the higher-level AVAudioRecorder and AVAudioPlayer from AVFoundation.
I also found a way to partially test my app in the simulator on my Mac mini: by plugging an Olympus audio recorder into the Mini over USB as an input "voice". This works as an alternative to the iSight, which does not provide audio input on the Mini.
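Along those lines, a hypothetical AVFoundation sketch of the serial, fixed-interval loop (the class name and file URLs are illustrative):

import AVFoundation

// Plays recorded snippets in order, starting one every `interval` seconds,
// wrapping around forever until stop() is called.
final class SnippetLooper {
    private let urls: [URL]            // e.g. recordedFile1.caf, recordedFile2.caf, ...
    private let interval: TimeInterval // time between the *starts* of snippets
    private var player: AVAudioPlayer?
    private var timer: Timer?
    private var index = 0

    init(urls: [URL], interval: TimeInterval) {
        self.urls = urls
        self.interval = interval
    }

    func start() {
        playNext()
        timer = Timer.scheduledTimer(withTimeInterval: interval, repeats: true) { [weak self] _ in
            // Each tick starts the next snippet, so starts are a fixed time apart.
            self?.playNext()
        }
    }

    func stop() {
        timer?.invalidate()
        player?.stop()
    }

    private func playNext() {
        player = try? AVAudioPlayer(contentsOf: urls[index])
        player?.play()
        index = (index + 1) % urls.count // wrap around to loop forever
    }
}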

AudioToolbox MusicPlayer change program has no effect

MIDI noob in training here...
I have been using MusicPlayer/MusicSequence/MusicTrack to play MIDI notes on devices running iOS. The notes play fine, but I am struggling to change the instrument being played. As far as I can figure, this is how to do it:
- (void)setInstrument:(MIDIInstruments)program channel:(int)channel MusicTrack:(MusicTrack *)track time:(float)time {
    if (channel < 0 || channel > 15 || program >= MIDI_INSTRUMENT_COUNT || time < 0) {
        return;
    }
    // Status byte 0xCn = program change on channel n; data1 = program number.
    MIDIChannelMessage programChange = { ((UInt8)0xC) << 4 | ((UInt8)channel), ((UInt8)program), 0, 0 };
    OSStatus result = MusicTrackNewMIDIChannelEvent(*track, time, &programChange);
    if (result != noErr) {
        [NSException raise:@"Set Instrument"
                    format:@"Failed to set instrument error: %@",
                           [NSError errorWithDomain:NSOSStatusErrorDomain code:result userInfo:nil]];
    }
}
In this case channel is 0 or 1, I tried several instruments throughout the range of valid instrument enumerations, the time is 0.0, and the MusicTrack is valid and has ~30 seconds of note events. The call to add the channel event returns noErr. I am stumped... Anyone?
I had read in other posts that I would be able to generate MIDI using MusicPlayer and friends, and since it provides for program changes, I figured changing instruments was supported. After exhausting all theories, I turned to AUGraph. I added an .sf2 file that I found online, and instantiated an AUGraph, two AudioUnits, a MIDIEndpointRef, and a MIDIClientRef, according to this tutorial.
It was in the endpoint callback that I turned notes on and off using MusicDeviceMIDIEvent on the sampler unit, and that path did allow the program change, whereas before I was just loading note events into a MusicTrack and playing/stopping the MusicPlayer.
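The key call is MusicDeviceMIDIEvent. A minimal Swift sketch (samplerUnit is assumed to be the AUSampler node's AudioUnit from the AUGraph, with a sound bank already loaded):

import AudioToolbox

// Sends a MIDI program change straight to the sampler AudioUnit.
func setProgram(_ program: UInt32, channel: UInt32, on samplerUnit: AudioUnit) {
    let status = MusicDeviceMIDIEvent(samplerUnit,
                                      0xC0 | channel, // status byte: program change on `channel`
                                      program,        // instrument number, 0-127
                                      0,              // program change has no second data byte
                                      0)              // sample offset 0 = apply immediately
    assert(status == noErr)
}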
