Detecting Silent Switch in iOS 7 issue - ios

I am using the following code to check whether the iPhone silent switch is ON or OFF:
if (self)
{
    self.detector = [SharkfoodMuteSwitchDetector shared];
    CheckInViewController* sself = self;
    self.detector.silentNotify = ^(BOOL silent)
    {
        [sself.silentSwitch setOn:silent animated:YES];
    };
}
It works fine on iOS 6 and below, but on iOS 7 it always reports TRUE. Can anyone tell me how to resolve this issue?
Thanks in advance.

It doesn't work in iOS 7, and once you see why it fails there, you'll see it never really worked reliably in iOS 6 either. This solution is based on the same code, so all credit to the original author.
Keep mute.caf from your SharkfoodMuteSwitchDetector.
Create a new class, called HASilentSwitchDetector (or whatever), or replace the code in SharkfoodMuteSwitchDetector.
In the header file:
#import <AudioToolbox/AudioToolbox.h>

typedef void(^HASilentSwitchDetectorBlock)(BOOL success, BOOL silent);

@interface HASilentSwitchDetector : NSObject
+ (void)ifMute:(HASilentSwitchDetectorBlock)then;
@end
In the implementation file:
#import "HASilentSwitchDetector.h"
void MuteSoundPlaybackComplete(SystemSoundID ssID, void *clientData)
{
//Completion
NSDictionary *soundCompletion = CFBridgingRelease(clientData);
//Mute
NSTimeInterval interval = [soundCompletion[#"interval"] doubleValue];
NSTimeInterval elapsed = [NSDate timeIntervalSinceReferenceDate] - interval;
BOOL isMute = elapsed < 0.2; // mute.caf is .2s long...
//Then
HASilentSwitchDetectorBlock then = soundCompletion[#"then"];
then(YES, isMute);
//Cleanup
SystemSoundID soundID = [soundCompletion[#"soundID"] integerValue];
AudioServicesRemoveSystemSoundCompletion(soundID);
AudioServicesDisposeSystemSoundID(soundID);
}
@implementation HASilentSwitchDetector

+ (void)ifMute:(HASilentSwitchDetectorBlock)then
{
    //Check
    if ( !then ) {
        return;
    }
    //Create
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"mute" withExtension:@"caf"];
    SystemSoundID soundID;
    if ( AudioServicesCreateSystemSoundID((__bridge CFURLRef)url, &soundID) == kAudioServicesNoError ) {
        //UI Sound (UI sounds are silenced by the ringer/silent switch)
        UInt32 yes = 1;
        AudioServicesSetProperty(kAudioServicesPropertyIsUISound, sizeof(soundID), &soundID, sizeof(yes), &yes);
        //Callback
        NSDictionary *soundCompletion = @{@"then" : [then copy], @"soundID" : @(soundID), @"interval" : @([NSDate timeIntervalSinceReferenceDate])};
        AudioServicesAddSystemSoundCompletion(soundID, CFRunLoopGetMain(), kCFRunLoopDefaultMode, MuteSoundPlaybackComplete, (void *)CFBridgingRetain(soundCompletion));
        //Play
        AudioServicesPlaySystemSound(soundID);
    } else {
        //Fail
        then(NO, NO);
    }
}

@end
Use like so:
[HASilentSwitchDetector ifMute:^(BOOL success, BOOL silent) {
    if ( success ) {
        if ( ![[NSUserDefaults standardUserDefaults] boolForKey:kHasShownMuteWarning] && silent ) {
            [[NSUserDefaults standardUserDefaults] setBool:YES forKey:kHasShownMuteWarning];
            [[[UIAlertView alloc] initWithTitle:[@"Mute Warning" localized] message:[NSString stringWithFormat:[@"This %@'s mute switch is on. To ensure your alarm will be audible, unmute your device." localized], [[[UIDevice currentDevice] isiPad] ? @"iPad" : @"iPhone" localized]] delegate:nil cancelButtonTitle:nil otherButtonTitles:[@"Ok" localized], nil] show];
        }
    }
}];
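Note that localized and isiPad in the snippet above are helper categories from the original project, not UIKit API. If your project doesn't have them, minimal stand-ins along these lines (names and behavior assumed) will make the example compile:
@interface NSString (HALocalized)
- (NSString *)localized;
@end

@implementation NSString (HALocalized)
// Assumed helper: looks the receiver up in Localizable.strings
- (NSString *)localized { return NSLocalizedString(self, nil); }
@end

@interface UIDevice (HAIdiom)
- (BOOL)isiPad;
@end

@implementation UIDevice (HAIdiom)
// Assumed helper: YES when running with the iPad interface idiom
- (BOOL)isiPad { return UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad; }
@end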

Related

No known class method for selector 'presentCheckoutForSubmittingTransactionCompletionHandler:cancelHandler:'

I am trying to integrate HyperPay payments into a React Native project and I am having problems with the Objective-C side. I followed an article, ran into many issues, and solved most of them by searching, but two issues remain that I can't solve because I am not familiar with Objective-C.
Issue 1,
No known class method for selector 'presentCheckoutForSubmittingTransactionCompletionHandler:cancelHandler:'
Issue 2,
No known class method for selector 'dismissCheckoutAnimated:completion:'
I am sorry the code is long, but I don't want to miss anything.
// RCTCalendarModule.m
#import "HyperPay.h"
#import "UIKit/UIKit.h"
#import <OPPWAMobile/OPPWAMobile.h>

@implementation HyperPay {
    RCTResponseSenderBlock onDoneClick;
    RCTResponseSenderBlock onCancelClick;
    UIViewController *rootViewController;
    NSString *isRedirect;
    OPPPaymentProvider *provider;
}
// To export a module named RCTCalendarModule
RCT_EXPORT_METHOD(openHyperPay:(NSDictionary *)indic createDialog:(RCTResponseSenderBlock)doneCallback createDialog:(RCTResponseSenderBlock)cancelCallback) {
    onDoneClick = doneCallback;
    onCancelClick = cancelCallback;
    NSArray *events = @[];
    if ([indic[@"is_sandbox"] isEqualToString:@"1"]) {
        provider = [OPPPaymentProvider paymentProviderWithMode:OPPProviderModeTest];
    } else {
        provider = [OPPPaymentProvider paymentProviderWithMode:OPPProviderModeLive];
    }
    OPPCheckoutSettings *checkoutSettings = [[OPPCheckoutSettings alloc] init];
    // Set available payment brands for your shop
    checkoutSettings.paymentBrands = @[@"VISA", @"MASTER"];
    // Set shopper result URL
    checkoutSettings.shopperResultURL = @"com.simicart.enterprise.payments://result";
    OPPCheckoutProvider *checkoutProvider = [OPPCheckoutProvider checkoutProviderWithPaymentProvider:provider
                                                                                          checkoutID:indic[@"checkoutId"]
                                                                                            settings:checkoutSettings];
    dispatch_async(dispatch_get_main_queue(), ^{
        [OPPCheckoutProvider presentCheckoutForSubmittingTransactionCompletionHandler:^(OPPTransaction * _Nullable transaction, NSError * _Nullable error) {
            if (error) {
                // Executed in case of failure of the transaction for any reason
                if (isRedirect && ![isRedirect isEqualToString:@"1"]) {
                    onCancelClick(@[@"cancel", events]);
                }
            } else if (transaction.type == OPPTransactionTypeSynchronous) {
                // Send request to your server to obtain the status of the synchronous transaction
                // You can use transaction.resourcePath or just checkout id to do it
                NSDictionary *responeDic = @{@"resourcePath" : transaction.resourcePath};
                onDoneClick(@[responeDic, events]);
                NSLog(@"%@", transaction.resourcePath);
            } else {
                // The SDK opens transaction.redirectUrl in a browser
                // See 'Asynchronous Payments' guide for more details
            }
        } cancelHandler:^{
            onCancelClick(@[@"cancel", events]);
            // Executed if the shopper closes the payment page prematurely
        }];
    });
}
- (instancetype)init {
    self = [super init];
    if (self) {
        [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(getStatusOder:) name:@"getStatusOrder" object:nil];
    }
    return self;
}

- (void)getStatusOder:(NSNotification *)noti {
    [OPPCheckoutProvider dismissCheckoutAnimated:YES completion:^{
        isRedirect = @"1";
        NSURL *url = noti.object;
        NSString *urlString = [url absoluteString];
        NSLog(@"%@", urlString);
        if (![urlString isEqualToString:@"com.simicart.enterprise.payments://result"]) {
            NSArray *events = @[];
            NSDictionary *responeDic = @{@"url" : urlString};
            onDoneClick(@[responeDic, events]);
        }
    }];
}

@end

iOS SWIFT - WebRTC change from Front Camera to back Camera

WebRTC video uses the front camera by default, which works fine. However, I need to switch to the back camera, and I have not been able to find any code to do that.
Which part do I need to edit?
Is it the localView, the localVideoTrack, or the capturer?
Swift 3.0
A peer connection can have only one RTCVideoTrack for sending video.
To switch between the front and back camera, first remove the current video track from the peer connection.
Then create a new RTCVideoTrack for the camera you need, and set that on the peer connection.
I used these methods:
func swapCameraToFront() {
    let localStream: RTCMediaStream? = peerConnection?.localStreams.first as? RTCMediaStream
    localStream?.removeVideoTrack(localStream?.videoTracks.first as! RTCVideoTrack)
    let localVideoTrack: RTCVideoTrack? = createLocalVideoTrack()
    if localVideoTrack != nil {
        localStream?.addVideoTrack(localVideoTrack!)
        delegate?.appClient(self, didReceiveLocalVideoTrack: localVideoTrack!)
    }
    if let stream = localStream {
        peerConnection?.remove(stream)
        peerConnection?.add(stream)
    }
}

func swapCameraToBack() {
    let localStream: RTCMediaStream? = peerConnection?.localStreams.first as? RTCMediaStream
    localStream?.removeVideoTrack(localStream?.videoTracks.first as! RTCVideoTrack)
    let localVideoTrack: RTCVideoTrack? = createLocalVideoTrackBackCamera()
    if localVideoTrack != nil {
        localStream?.addVideoTrack(localVideoTrack!)
        delegate?.appClient(self, didReceiveLocalVideoTrack: localVideoTrack!)
    }
    if let stream = localStream {
        peerConnection?.remove(stream)
        peerConnection?.add(stream)
    }
}
As of now I only have the answer in Objective-C, in response to Ankit's comment below. I will convert it to Swift after some time.
You can check the code below:
- (RTCVideoTrack *)createLocalVideoTrack {
    RTCVideoTrack *localVideoTrack = nil;
    NSString *cameraID = nil;
    for (AVCaptureDevice *captureDevice in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (captureDevice.position == AVCaptureDevicePositionFront) {
            cameraID = [captureDevice localizedName];
            break;
        }
    }
    RTCVideoCapturer *capturer = [RTCVideoCapturer capturerWithDeviceName:cameraID];
    RTCMediaConstraints *mediaConstraints = [self defaultMediaStreamConstraints];
    RTCVideoSource *videoSource = [_factory videoSourceWithCapturer:capturer constraints:mediaConstraints];
    localVideoTrack = [_factory videoTrackWithID:@"ARDAMSv0" source:videoSource];
    return localVideoTrack;
}

- (RTCVideoTrack *)createLocalVideoTrackBackCamera {
    RTCVideoTrack *localVideoTrack = nil;
    // Same as above, but looking for AVCaptureDevicePositionBack
    NSString *cameraID = nil;
    for (AVCaptureDevice *captureDevice in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (captureDevice.position == AVCaptureDevicePositionBack) {
            cameraID = [captureDevice localizedName];
            break;
        }
    }
    RTCVideoCapturer *capturer = [RTCVideoCapturer capturerWithDeviceName:cameraID];
    RTCMediaConstraints *mediaConstraints = [self defaultMediaStreamConstraints];
    RTCVideoSource *videoSource = [_factory videoSourceWithCapturer:capturer constraints:mediaConstraints];
    localVideoTrack = [_factory videoTrackWithID:@"ARDAMSv0" source:videoSource];
    return localVideoTrack;
}
If you decide to use the official Google build, here is the explanation:
First, you must configure your camera before starting the call; the best place to do that is in the didCreateLocalCapturer method of ARDVideoCallViewDelegate.
- (void)startCapture:(void (^)(BOOL succeeded))completionHandler {
    AVCaptureDevicePosition position = _usingFrontCamera ? AVCaptureDevicePositionFront : AVCaptureDevicePositionBack;
    __weak AVCaptureDevice *device = [self findDeviceForPosition:position];
    if ([device lockForConfiguration:nil]) {
        if ([device isFocusPointOfInterestSupported]) {
            [device setFocusModeLockedWithLensPosition:0.9 completionHandler:nil];
        }
    }
    AVCaptureDeviceFormat *format = [self selectFormatForDevice:device];
    if (format == nil) {
        RTCLogError(@"No valid formats for device %@", device);
        NSAssert(NO, @"");
        return;
    }
    NSInteger fps = [self selectFpsForFormat:format];
    [_capturer startCaptureWithDevice:device
                               format:format
                                  fps:fps
                    completionHandler:^(NSError *error) {
        NSLog(@"%@", error);
        if (error == nil) {
            completionHandler(true);
        }
    }];
}
Don't forget that enabling the capture device is asynchronous; it is often better to use the completion handler to be sure everything is done as expected.
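For example, a minimal sketch of a camera switch that relies on that completion (startCapture: is the method above; _usingFrontCamera is assumed to be the same flag it reads):
- (void)switchCamera {
    // Flip the camera flag, then restart capture; only trust the result
    // once the asynchronous completion fires.
    _usingFrontCamera = !_usingFrontCamera;
    [self startCapture:^(BOOL succeeded) {
        if (!succeeded) {
            NSLog(@"Camera switch failed");
        }
    }];
}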
I am not sure which Chrome version you are using for WebRTC, but with v54 and above there is a BOOL property called useBackCamera in the RTCAVFoundationVideoSource class. You can use this property to switch between the front and back camera.
Swift 4.0 & 'GoogleWebRTC' : '1.1.20913'
The RTCAVFoundationVideoSource class has a property named useBackCamera that can be used to switch the camera in use.
@interface RTCAVFoundationVideoSource : RTCVideoSource

- (instancetype)init NS_UNAVAILABLE;

/**
 * Calling this function will cause frames to be scaled down to the
 * requested resolution. Also, frames will be cropped to match the
 * requested aspect ratio, and frames will be dropped to match the
 * requested fps. The requested aspect ratio is orientation agnostic and
 * will be adjusted to maintain the input orientation, so it doesn't
 * matter if e.g. 1280x720 or 720x1280 is requested.
 */
- (void)adaptOutputFormatToWidth:(int)width height:(int)height fps:(int)fps;

/** Returns whether rear-facing camera is available for use. */
@property(nonatomic, readonly) BOOL canUseBackCamera;

/** Switches the camera being used (either front or back). */
@property(nonatomic, assign) BOOL useBackCamera;

/** Returns the active capture session. */
@property(nonatomic, readonly) AVCaptureSession *captureSession;

@end
Below is the implementation for switching the camera.
var useBackCamera: Bool = false

func switchCamera() {
    useBackCamera = !useBackCamera
    self.switchCamera(useBackCamera: useBackCamera)
}

private func switchCamera(useBackCamera: Bool) -> Void {
    let localStream = peerConnection?.localStreams.first
    if let videoTrack = localStream?.videoTracks.first {
        localStream?.removeVideoTrack(videoTrack)
    }
    let localVideoTrack = createLocalVideoTrack(useBackCamera: useBackCamera)
    localStream?.addVideoTrack(localVideoTrack)
    self.delegate?.webRTCClientDidAddLocal(videoTrack: localVideoTrack)
    if let ls = localStream {
        peerConnection?.remove(ls)
        peerConnection?.add(ls)
    }
}

func createLocalVideoTrack(useBackCamera: Bool) -> RTCVideoTrack {
    let videoSource = self.factory.avFoundationVideoSource(with: self.constraints)
    videoSource.useBackCamera = useBackCamera
    let videoTrack = self.factory.videoTrack(with: videoSource, trackId: "video")
    return videoTrack
}
In the current version of WebRTC, RTCAVFoundationVideoSource has been deprecated and replaced with a generic RTCVideoSource combined with an RTCVideoCapturer implementation.
In order to switch the camera I'm doing this:
- (void)switchCameraToPosition:(AVCaptureDevicePosition)position completionHandler:(void (^)(void))completionHandler {
    if (self.cameraPosition != position) {
        RTCMediaStream *localStream = self.peerConnection.localStreams.firstObject;
        [localStream removeVideoTrack:self.localVideoTrack];
        //[self.peerConnection removeStream:localStream];
        self.localVideoTrack = [self createVideoTrack];
        [self startCaptureLocalVideoWithPosition:position completionHandler:^{
            [localStream addVideoTrack:self.localVideoTrack];
            //[self.peerConnection addStream:localStream];
            if (completionHandler) {
                completionHandler();
            }
        }];
        self.cameraPosition = position;
    }
}
Take a look at the commented lines: if you start adding/removing the stream from the peer connection, it will cause a delay in the video connection.
I'm using GoogleWebRTC-1.1.25102
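The startCaptureLocalVideoWithPosition:completionHandler: helper isn't shown above. A hedged sketch of what it can look like with RTCCameraVideoCapturer (device, format, and fps selection kept deliberately naive; self.capturer is assumed to be the RTCCameraVideoCapturer that backs the local RTCVideoSource):
- (void)startCaptureLocalVideoWithPosition:(AVCaptureDevicePosition)position
                         completionHandler:(void (^)(void))completionHandler {
    // Pick the capture device matching the requested position.
    AVCaptureDevice *device = nil;
    for (AVCaptureDevice *d in [RTCCameraVideoCapturer captureDevices]) {
        if (d.position == position) {
            device = d;
            break;
        }
    }
    if (!device) {
        return;
    }
    // Naive selection: first supported format and its maximum frame rate.
    AVCaptureDeviceFormat *format = [RTCCameraVideoCapturer supportedFormatsForDevice:device].firstObject;
    NSInteger fps = (NSInteger)format.videoSupportedFrameRateRanges.firstObject.maxFrameRate;
    [self.capturer startCaptureWithDevice:device
                                   format:format
                                      fps:fps
                        completionHandler:^(NSError *error) {
        if (!error && completionHandler) {
            completionHandler();
        }
    }];
}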

iOS 9 How to Detect Silent Mode?

As AudioSessionInitialize and AudioSessionGetProperty are deprecated, I am getting the wrong return values:
CFStringRef state = nil;
UInt32 propertySize = sizeof(CFStringRef);
AudioSessionInitialize(NULL, NULL, NULL, NULL);
OSStatus status = AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &propertySize, &state);
[[AVAudioSession sharedInstance] setActive:YES error:nil];
if (status == kAudioSessionNoError) {
    return CFStringGetLength(state) == 0; // YES = silent
}
return NO;
With this code (which I found here), I get the same incorrect result no matter what state the device is actually in. How can I detect whether silent mode is currently ON?
The API is no longer available, but the workaround is simple:
Play a short sound and note the time at which it finishes playing.
If it finishes in less time than the actual length of the sound, the device is muted.
Hoishing posted a helper class MuteChecker on his blog. Use it as follows:
self.muteChecker = [[MuteChecker alloc] initWithCompletionBlk:^(NSTimeInterval lapse, BOOL muted) {
    NSLog(@"muted: %d", muted);
}];
[self.muteChecker check];
This is the complete code for the class; you can simply copy-paste it into your project:
MuteChecker.h
#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>

typedef void (^MuteCheckCompletionHandler)(NSTimeInterval lapse, BOOL muted);

// this class must be used with a MuteChecker.caf (a 0.2 sec mute sound) in the bundle
@interface MuteChecker : NSObject
- (instancetype)initWithCompletionBlk:(MuteCheckCompletionHandler)completionBlk;
- (void)check;
@end
MuteChecker.m
#import "MuteChecker.h"
void MuteCheckCompletionProc(SystemSoundID ssID, void* clientData);
#interface MuteChecker ()
#property (nonatomic,assign) SystemSoundID soundId;
#property (strong) MuteCheckCompletionHandler completionBlk;
#property (nonatomic, strong)NSDate *startTime;
-(void)completed;
#end
void MuteCheckCompletionProc(SystemSoundID ssID, void* clientData){
MuteChecker *obj = (__bridge MuteChecker *)clientData;
[obj completed];
}
#implementation MuteChecker
-(void)playMuteSound
{
self.startTime = [NSDate date];
AudioServicesPlaySystemSound(self.soundId);
}
-(void)completed
{
NSDate *now = [NSDate date];
NSTimeInterval t = [now timeIntervalSinceDate:self.startTime];
BOOL muted = (t > 0.1)? NO : YES;
self.completionBlk(t, muted);
}
-(void)check {
if (self.startTime == nil) {
[self playMuteSound];
} else {
NSDate *now = [NSDate date];
NSTimeInterval lastCheck = [now timeIntervalSinceDate:self.startTime];
if (lastCheck > 1) { //prevent checking interval shorter then the sound length
[self playMuteSound];
}
}
}
- (instancetype)initWithCompletionBlk:(MuteCheckCompletionHandler)completionBlk
{
self = [self init];
if (self) {
NSURL* url = [[NSBundle mainBundle] URLForResource:#"MuteChecker" withExtension:#"caf"];
if (AudioServicesCreateSystemSoundID((__bridge CFURLRef)url, &_soundId) == kAudioServicesNoError){
AudioServicesAddSystemSoundCompletion(self.soundId, CFRunLoopGetMain(), kCFRunLoopDefaultMode, MuteCheckCompletionProc,(__bridge void *)(self));
UInt32 yes = 1;
AudioServicesSetProperty(kAudioServicesPropertyIsUISound, sizeof(_soundId),&_soundId,sizeof(yes), &yes);
self.completionBlk = completionBlk;
} else {
NSLog(#"error setting up Sound ID");
}
}
return self;
}
- (void)dealloc
{
if (self.soundId != -1){
AudioServicesRemoveSystemSoundCompletion(self.soundId);
AudioServicesDisposeSystemSoundID(self.soundId);
}
}
#end
Important note: you will also have to provide a short audio file, MuteChecker.caf, for the code to work. You can download one directly from his blog, or generate one yourself.
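If you generate it yourself, one option is a tiny throwaway snippet using AVAudioFile; this is a sketch under the assumption that 0.2 s of silence is what the class expects (run it once, then copy the resulting file into your app bundle):
#import <AVFoundation/AVFoundation.h>

// Writes 0.2 s of silence to MuteChecker.caf in the Documents directory.
NSURL *docs = [[NSFileManager defaultManager] URLsForDirectory:NSDocumentDirectory
                                                     inDomains:NSUserDomainMask].firstObject;
NSURL *outURL = [docs URLByAppendingPathComponent:@"MuteChecker.caf"];
AVAudioFormat *format = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:44100 channels:1];
AVAudioPCMBuffer *silence = [[AVAudioPCMBuffer alloc] initWithPCMFormat:format
                                                          frameCapacity:(AVAudioFrameCount)(44100 * 0.2)];
silence.frameLength = silence.frameCapacity; // freshly allocated buffers are zero-filled, i.e. silent
NSError *error = nil;
AVAudioFile *file = [[AVAudioFile alloc] initForWriting:outURL settings:format.settings error:&error];
[file writeFromBuffer:silence error:&error];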

uniqueidentifier is deprecated first in ios 5

I have this piece of code in my app:
- (void)trackWithCategory:(NSString *)category withAction:(NSString *)action withValue:(float)value
{
    AppController *ac = (AppController *)[UIApplication sharedApplication].delegate;
    BOOL result = [ac.tracker trackEventWithCategory:category
                                          withAction:action
                                           withLabel:[UIDevice currentDevice].uniqueIdentifier
                                           withValue:[NSNumber numberWithInt:(int)(value + 0.5)]];
    if (!result)
        NSLog(@"Google Analytics track event failed");
}
When I try to build, it gives me an error about this line:
withLabel:[UIDevice currentDevice].uniqueIdentifier
It's right: uniqueIdentifier has been deprecated since iOS 5.
How can I fix it? How can I write it differently so that it works?
Use CFUUID: create an identifier once and store it using NSUserDefaults.
A sample:
NSString *identifierString = [[NSUserDefaults standardUserDefaults] objectForKey:@"myID"];
if (!identifierString) {
    CFUUIDRef identifier = CFUUIDCreate(NULL);
    identifierString = (__bridge_transfer NSString *)CFUUIDCreateString(NULL, identifier);
    CFRelease(identifier); // the CFUUIDRef itself still has to be released
    [[NSUserDefaults standardUserDefaults] setObject:identifierString forKey:@"myID"];
}
NSLog(@"%@", identifierString);
/* ... */
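If you only need a per-install identifier and can target iOS 6 or later, UIDevice's identifierForVendor is the documented replacement; a minimal alternative:
// identifierForVendor stays stable while any of the vendor's apps remain
// installed; persist it (as above) if you need it to survive reinstalls.
NSString *identifierString = [[[UIDevice currentDevice] identifierForVendor] UUIDString];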

siphon calling doesn't work - pjsip

So I have a compiled and running Siphon app, but it just won't make calls.
I get:
registration error - the default error message.
Full error is this:
15:04:02.032 pjsua_call.c Making call with acc #0 to sip:6476805821@voip5-2.acanac.com
15:04:02.032 pjsua_call.c .Unable to make call because account is not valid: Invalid operation (PJ_EINVALIDOP) [status=70013]
15:04:05.580 call.m Error making call: Invalid operation (PJ_EINVALIDOP) [status=70013]
But when I use the same account on a different SIP app, it works perfectly fine.
When pjsip calls sip_dial_with_uri(_sip_acc_id, [url UTF8String], &call_id);,
_sip_acc_id is 0, since I believe it's the 0th account in Siphon's settings.
url is the correct phone number I'm trying to dial and looks something like:
sip:62304892@url.com
call_id is just an output reference, so I don't know if it's important.
When I look at other VoIP apps, they have a registration process where you enter your username, password, and SIP server domain or IP.
For Siphon, this is done in the settings file. However, I'm not sure whether the "register or login" step is actually done in Siphon's code.
Could that be the problem?
This is the code that tries to make an actual call:
/** FIXME: this should rather live in the object that manages calls **/
- (void)dialup:(NSString *)phoneNumber number:(BOOL)isNumber
{
    pjsua_call_id call_id;
    pj_status_t status;
    NSString *number;
    UInt32 hasMicro, size;
    // Verify if microphone is available (perhaps we should verify in another place?)
    size = sizeof(hasMicro);
    AudioSessionGetProperty(kAudioSessionProperty_AudioInputAvailable,
                            &size, &hasMicro);
    /*if (!hasMicro)
    {
        UIAlertView *alert = [[UIAlertView alloc] initWithTitle:NSLocalizedString(@"No Microphone Available", @"SiphonApp")
                                                        message:NSLocalizedString(@"Connect a microphone to phone", @"SiphonApp")
                                                       delegate:nil
                                              cancelButtonTitle:NSLocalizedString(@"OK", @"SiphonApp")
                                              otherButtonTitles:nil];
        [alert show];
        [alert release];
        return;
    }*/
    if (isNumber)
        number = [self normalizePhoneNumber:phoneNumber];
    else
        number = phoneNumber;
    if ([[NSUserDefaults standardUserDefaults] boolForKey:@"removeIntlPrefix"])
    {
        number = [number stringByReplacingOccurrencesOfString:@"+"
                                                   withString:@""
                                                      options:0
                                                        range:NSMakeRange(0, 1)];
    }
    else
    {
        NSString *prefix = [[NSUserDefaults standardUserDefaults] stringForKey:@"intlPrefix"];
        if ([prefix length] > 0)
        {
            number = [number stringByReplacingOccurrencesOfString:@"+"
                                                       withString:prefix
                                                          options:0
                                                            range:NSMakeRange(0, 1)];
        }
    }
    // Manage pause symbol
    NSArray *array = [number componentsSeparatedByString:@","];
    [callViewController setDtmfCmd:@""];
    if ([array count] > 1)
    {
        number = [array objectAtIndex:0];
        [callViewController setDtmfCmd:[array objectAtIndex:1]];
    }
    if (!isConnected && [self wakeUpNetwork] == NO)
    {
        _phoneNumber = [[NSString stringWithString:number] retain];
        if (isIpod)
        {
            UIAlertView *alertView = [[[UIAlertView alloc] initWithTitle:nil
                                                                 message:NSLocalizedString(@"You must enable Wi-Fi or SIP account to place a call.", @"SiphonApp")
                                                                delegate:nil
                                                       cancelButtonTitle:NSLocalizedString(@"OK", @"SiphonApp")
                                                       otherButtonTitles:nil] autorelease];
            [alertView show];
        }
        else
        {
            UIActionSheet *actionSheet = [[[UIActionSheet alloc] initWithTitle:NSLocalizedString(@"The SIP server is unreachable!", @"SiphonApp")
                                                                      delegate:self
                                                             cancelButtonTitle:NSLocalizedString(@"Cancel", @"SiphonApp")
                                                        destructiveButtonTitle:nil
                                                             otherButtonTitles:NSLocalizedString(@"Cellular call", @"SiphonApp"),
                                                                               nil] autorelease];
            actionSheet.actionSheetStyle = UIActionSheetStyleDefault;
            [actionSheet showInView:self.window];
        }
        return;
    }
    if ([self sipConnect])
    {
        NSRange range = [number rangeOfString:@"#"];
        NSLog(@"%i", _sip_acc_id);
        if (range.location != NSNotFound)
        {
            status = sip_dial_with_uri(_sip_acc_id, [[NSString stringWithFormat:@"sip:%@", number] UTF8String], &call_id);
        }
        else
            status = sip_dial(_sip_acc_id, [number UTF8String], &call_id);
        if (status != PJ_SUCCESS)
        {
            // FIXME
            //[self displayStatus:status withTitle:nil];
            const pj_str_t *str = pjsip_get_status_text(status);
            NSString *msg = [[NSString alloc] initWithBytes:str->ptr
                                                     length:str->slen
                                                   encoding:[NSString defaultCStringEncoding]];
            [self displayError:msg withTitle:@"registration error"];
        }
    }
}
Also if anyone has a link to the Siphon app's code that's newer and maybe works better, I'd appreciate that as well.
More info:
In the call.m file, essentially this gets called:
status = pjsua_call_make_call(acc_id, &pj_uri, 0, NULL, NULL, call_id);
and here:
acc_id = 0
pj_uri = char * -> "sip:6476805821@voip5-2.acanac.com"
pj_ssize_t -> 33
call_id = 803203976
I figured this out. It turns out the Siphon app wasn't registering the account.
The code below is important:
pj_status_t sip_connect(pj_pool_t *pool, pjsua_acc_id *acc_id)
{
    // ID
    acc_cfg.id.ptr = (char *)pj_pool_alloc(/*app_config.*/pool, PJSIP_MAX_URL_SIZE);
    if (contactname && strlen(contactname))
        acc_cfg.id.slen = pj_ansi_snprintf(acc_cfg.id.ptr, PJSIP_MAX_URL_SIZE,
                                           "\"%s\"<sip:%s@%s>", contactname, uname, server);
    else
        acc_cfg.id.slen = pj_ansi_snprintf(acc_cfg.id.ptr, PJSIP_MAX_URL_SIZE,
                                           "sip:%s@%s", uname, server);
    if ((status = pjsua_verify_sip_url(acc_cfg.id.ptr)) != 0)
    {
        PJ_LOG(1, (THIS_FILE, "Error: invalid SIP URL '%s' in local id argument",
                   acc_cfg.id));
        [app displayParameterError:@"Invalid value for username or server."];
        return status;
    }
    // Registrar
    acc_cfg.reg_uri.ptr = (char *)pj_pool_alloc(/*app_config.*/pool,
                                                PJSIP_MAX_URL_SIZE);
    acc_cfg.reg_uri.slen = pj_ansi_snprintf(acc_cfg.reg_uri.ptr,
                                            PJSIP_MAX_URL_SIZE, "sip:%s", server);
    if ((status = pjsua_verify_sip_url(acc_cfg.reg_uri.ptr)) != 0)
    {
        PJ_LOG(1, (THIS_FILE, "Error: invalid SIP URL '%s' in registrar argument",
                   acc_cfg.reg_uri));
        [app displayParameterError:@"Invalid value for server parameter."];
        return status;
    }
    ...
    more code here
    ...
}
This is where your account gets registered with the SIP server.
Make sure the sip_connect function gets called from the main application itself, as shown below:
/* */
- (BOOL)sipConnect
{
    pj_status_t status;
    if (![self sipStartup])
        return FALSE;
    //if ([self wakeUpNetwork] == NO)
    //    return NO;
    NSLog(@"%i", _sip_acc_id);
    //if (_sip_acc_id == PJSUA_INVALID_ID)
    //{
        self.networkActivityIndicatorVisible = YES;
        if ((status = sip_connect(_app_config.pool, &_sip_acc_id)) != PJ_SUCCESS)
        {
            self.networkActivityIndicatorVisible = NO;
            return FALSE;
        }
    //}
    return TRUE;
}
In my case _sip_acc_id wasn't equal to PJSUA_INVALID_ID, so with that check in place sip_connect was never getting called; commenting the check out (as above) fixed it.
Thanks to everyone who tried to solve it in their heads. :)
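For anyone hitting the same PJ_EINVALIDOP, here is a hedged sketch of a pre-dial guard (pjsua_acc_is_valid and pjsua_acc_get_info are standard PJSUA APIs; the re-connect path assumes the sipConnect method above):
// Verify the account exists and is registered before dialing, so a bad
// account shows up as a registration problem rather than a failed call.
if (!pjsua_acc_is_valid(_sip_acc_id)) {
    if (![self sipConnect]) // account missing: (re)register first
        return;
}
pjsua_acc_info info;
if (pjsua_acc_get_info(_sip_acc_id, &info) == PJ_SUCCESS && info.status != PJSIP_SC_OK) {
    NSLog(@"Account not registered yet (SIP status %d)", info.status);
}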
You are unlikely to get any useful help unless you post a code snippet as well as error output (at minimum). More context, such as configuration info and relevant aspects of your network, will further improve your chances.
(I would have added this as a comment on the question, but don't yet have the required reputation.)
