Private iOS Framework Returning NULL

I'm trying to use the BatteryCenter and CommonUtilities private frameworks under iOS 9.1 with the help of nst's iOS Runtime Headers. It's for research purposes and won't make it to the App Store.
Here is the code for each:
- (void)batteryCenter {
    NSBundle *bundle = [NSBundle bundleWithPath:@"/System/Library/PrivateFrameworks/BatteryCenter.framework"];
    BOOL success = [bundle load];
    if (success) {
        Class BCBatteryDevice = NSClassFromString(@"BCBatteryDevice");
        id si = [[BCBatteryDevice alloc] init];
        NSLog(@"Charging: %@", [si valueForKey:@"charging"]);
    }
}
- (void)commonUtilities {
    NSBundle *bundle = [NSBundle bundleWithPath:@"/System/Library/PrivateFrameworks/CommonUtilities.framework"];
    BOOL success = [bundle load];
    if (success) {
        Class CommonUtilities = NSClassFromString(@"CUTWiFiManager");
        id si = [CommonUtilities valueForKey:@"sharedInstance"];
        NSLog(@"Is Wi-Fi Enabled: %@", [si valueForKey:@"isWiFiEnabled"]);
        NSLog(@"Wi-Fi Scaled RSSI: %@", [si valueForKey:@"wiFiScaledRSSI"]);
        NSLog(@"Last Wi-Fi Power Info: %@", [si valueForKey:@"lastWiFiPowerInfo"]);
    }
}
Although I get the classes back, all of their respective values are NULL, which is odd since some of them must have real values; e.g. I'm connected to Wi-Fi, so isWiFiEnabled should be YES.
What exactly is missing, so that my code doesn't return what's expected? Does it need entitlements? If so, which ones?

In Swift, I managed to get this working without the BatteryCenter headers. I'm still looking for a way to access the list of attached batteries without using BCBatteryDeviceController, but this is what I have working so far:
Swift 3:
guard case let batteryCenterHandle = dlopen("/System/Library/PrivateFrameworks/BatteryCenter.framework/BatteryCenter", RTLD_LAZY), batteryCenterHandle != nil else {
    fatalError("BatteryCenter not found")
}
guard let batteryDeviceControllerClass = NSClassFromString("BCBatteryDeviceController") as? NSObjectProtocol else {
    fatalError("BCBatteryDeviceController not found")
}
let instance = batteryDeviceControllerClass.perform(Selector(("sharedInstance"))).takeUnretainedValue()
if let devices = instance.value(forKey: "connectedDevices") as? [AnyObject] {
    // You will have more than one battery in connectedDevices if your device is using a Smart Case
    for battery in devices {
        print(battery)
    }
}
Swift 2.2:
guard case let batteryCenterHandle = dlopen("/System/Library/PrivateFrameworks/BatteryCenter.framework/BatteryCenter", RTLD_LAZY) where batteryCenterHandle != nil else {
    fatalError("BatteryCenter not found")
}
guard let c = NSClassFromString("BCBatteryDeviceController") as? NSObjectProtocol else {
    fatalError("BCBatteryDeviceController not found")
}
let instance = c.performSelector("sharedInstance").takeUnretainedValue()
if let devices = instance.valueForKey("connectedDevices") as? [AnyObject] {
    // You will have more than one battery in connectedDevices if your device is using a Smart Case
    for battery in devices {
        print(battery)
    }
}
This logs:
<BCBatteryDevice: 0x15764a3d0; vendor = Apple; productIdentifier = 0; parts = (null); matchIdentifier = (null); baseIdentifier = InternalBattery-0; name = iPhone; percentCharge = 63; lowBattery = NO; connected = YES; charging = YES; internal = YES; powerSource = YES; poweredSoureState = AC Power; transportType = 1 >
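Each field in that log can also be read back individually with KVC. A minimal sketch (the key names are assumed from the -description output above; they are private and may change between iOS versions):
for battery in devices {
    guard let device = battery as? NSObject else { continue }
    // Private KVC keys observed in the log output above; not guaranteed stable.
    let percent = device.value(forKey: "percentCharge") as? Int ?? 0
    let charging = device.value(forKey: "charging") as? Bool ?? false
    print("charge: \(percent)%, charging: \(charging)")
}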

You first need to get the BCBatteryDeviceController; once the bundle has loaded successfully, it gives you the list of all connected devices.
Here is the code for that:
Class controllerClass = NSClassFromString(@"BCBatteryDeviceController");
id si = [controllerClass valueForKey:@"sharedInstance"];
BCBatteryDeviceController *objBCBatteryDeviceController = si;
NSLog(@"Connected devices: %@", objBCBatteryDeviceController.connectedDevices);

Related

iOS Swift - WebRTC change from Front Camera to Back Camera

WebRTC video uses the front camera by default, which works fine. However, I need to switch it to the back camera, and I have not been able to find any code to do that.
Which part do I need to edit?
Is it the localView, the localVideoTrack, or the capturer?
Swift 3.0
A peer connection can have only one RTCVideoTrack for sending the video stream.
To change between the front and back camera, you must first remove the current video track from the peer connection.
Then you create a new RTCVideoTrack on the camera you need, and set that on the peer connection.
I used these methods:
func swapCameraToFront() {
    if let localStream = peerConnection?.localStreams.first as? RTCMediaStream {
        if let currentTrack = localStream.videoTracks.first as? RTCVideoTrack {
            localStream.removeVideoTrack(currentTrack)
        }
        if let localVideoTrack = createLocalVideoTrack() {
            localStream.addVideoTrack(localVideoTrack)
            delegate?.appClient(self, didReceiveLocalVideoTrack: localVideoTrack)
        }
        peerConnection?.remove(localStream)
        peerConnection?.add(localStream)
    }
}

func swapCameraToBack() {
    if let localStream = peerConnection?.localStreams.first as? RTCMediaStream {
        if let currentTrack = localStream.videoTracks.first as? RTCVideoTrack {
            localStream.removeVideoTrack(currentTrack)
        }
        if let localVideoTrack = createLocalVideoTrackBackCamera() {
            localStream.addVideoTrack(localVideoTrack)
            delegate?.appClient(self, didReceiveLocalVideoTrack: localVideoTrack)
        }
        peerConnection?.remove(localStream)
        peerConnection?.add(localStream)
    }
}
As of now I only have the answer in Objective-C, regarding Ankit's comment below. I will convert it to Swift after some time.
You can check the code below:
- (RTCVideoTrack *)createLocalVideoTrack {
    RTCVideoTrack *localVideoTrack = nil;
    NSString *cameraID = nil;
    for (AVCaptureDevice *captureDevice in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (captureDevice.position == AVCaptureDevicePositionFront) {
            cameraID = [captureDevice localizedName];
            break;
        }
    }
    RTCVideoCapturer *capturer = [RTCVideoCapturer capturerWithDeviceName:cameraID];
    RTCMediaConstraints *mediaConstraints = [self defaultMediaStreamConstraints];
    RTCVideoSource *videoSource = [_factory videoSourceWithCapturer:capturer constraints:mediaConstraints];
    localVideoTrack = [_factory videoTrackWithID:@"ARDAMSv0" source:videoSource];
    return localVideoTrack;
}

- (RTCVideoTrack *)createLocalVideoTrackBackCamera {
    RTCVideoTrack *localVideoTrack = nil;
    // Same as above, but looking for AVCaptureDevicePositionBack
    NSString *cameraID = nil;
    for (AVCaptureDevice *captureDevice in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (captureDevice.position == AVCaptureDevicePositionBack) {
            cameraID = [captureDevice localizedName];
            break;
        }
    }
    RTCVideoCapturer *capturer = [RTCVideoCapturer capturerWithDeviceName:cameraID];
    RTCMediaConstraints *mediaConstraints = [self defaultMediaStreamConstraints];
    RTCVideoSource *videoSource = [_factory videoSourceWithCapturer:capturer constraints:mediaConstraints];
    localVideoTrack = [_factory videoTrackWithID:@"ARDAMSv0" source:videoSource];
    return localVideoTrack;
}
If you decide to use the official Google build, here is the explanation:
First, you must configure your camera before calling start; the best place to do that is the didCreateLocalCapturer method of ARDVideoCallViewDelegate.
- (void)startCapture:(void (^)(BOOL succeeded))completionHandler {
    AVCaptureDevicePosition position = _usingFrontCamera ? AVCaptureDevicePositionFront : AVCaptureDevicePositionBack;
    AVCaptureDevice *device = [self findDeviceForPosition:position];
    if ([device lockForConfiguration:nil]) {
        if ([device isFocusPointOfInterestSupported]) {
            [device setFocusModeLockedWithLensPosition:0.9 completionHandler:nil];
        }
        [device unlockForConfiguration];
    }
    AVCaptureDeviceFormat *format = [self selectFormatForDevice:device];
    if (format == nil) {
        RTCLogError(@"No valid formats for device %@", device);
        NSAssert(NO, @"");
        return;
    }
    NSInteger fps = [self selectFpsForFormat:format];
    [_capturer startCaptureWithDevice:device
                               format:format
                                  fps:fps
                    completionHandler:^(NSError *error) {
        NSLog(@"%@", error);
        if (error == nil) {
            completionHandler(YES);
        }
    }];
}
Don't forget that enabling the capture device is asynchronous; it's often better to use the completion handler to be sure everything is done as expected.
I am not sure which Chrome version your WebRTC build is based on, but with v54 and above there is a BOOL property called useBackCamera in the RTCAVFoundationVideoSource class. You can use this property to switch between the front and back camera.
Swift 4.0 & 'GoogleWebRTC' : '1.1.20913'
RTCAVFoundationVideoSource class has a property named useBackCamera that can be used for switching the camera used.
@interface RTCAVFoundationVideoSource : RTCVideoSource

- (instancetype)init NS_UNAVAILABLE;

/**
 * Calling this function will cause frames to be scaled down to the
 * requested resolution. Also, frames will be cropped to match the
 * requested aspect ratio, and frames will be dropped to match the
 * requested fps. The requested aspect ratio is orientation agnostic and
 * will be adjusted to maintain the input orientation, so it doesn't
 * matter if e.g. 1280x720 or 720x1280 is requested.
 */
- (void)adaptOutputFormatToWidth:(int)width height:(int)height fps:(int)fps;

/** Returns whether rear-facing camera is available for use. */
@property(nonatomic, readonly) BOOL canUseBackCamera;

/** Switches the camera being used (either front or back). */
@property(nonatomic, assign) BOOL useBackCamera;

/** Returns the active capture session. */
@property(nonatomic, readonly) AVCaptureSession *captureSession;
Below is the implementation for switching the camera.
var useBackCamera: Bool = false

func switchCamera() {
    useBackCamera = !useBackCamera
    self.switchCamera(useBackCamera: useBackCamera)
}

private func switchCamera(useBackCamera: Bool) {
    let localStream = peerConnection?.localStreams.first
    if let videoTrack = localStream?.videoTracks.first {
        localStream?.removeVideoTrack(videoTrack)
    }
    let localVideoTrack = createLocalVideoTrack(useBackCamera: useBackCamera)
    localStream?.addVideoTrack(localVideoTrack)
    self.delegate?.webRTCClientDidAddLocal(videoTrack: localVideoTrack)
    if let ls = localStream {
        peerConnection?.remove(ls)
        peerConnection?.add(ls)
    }
}

func createLocalVideoTrack(useBackCamera: Bool) -> RTCVideoTrack {
    let videoSource = self.factory.avFoundationVideoSource(with: self.constraints)
    videoSource.useBackCamera = useBackCamera
    return self.factory.videoTrack(with: videoSource, trackId: "video")
}
In the current version of WebRTC, RTCAVFoundationVideoSource has been deprecated and replaced with a
generic RTCVideoSource combined with an RTCVideoCapturer implementation.
In order to switch the camera I'm doing this:
- (void)switchCameraToPosition:(AVCaptureDevicePosition)position completionHandler:(void (^)(void))completionHandler {
    if (self.cameraPosition != position) {
        RTCMediaStream *localStream = self.peerConnection.localStreams.firstObject;
        [localStream removeVideoTrack:self.localVideoTrack];
        //[self.peerConnection removeStream:localStream];
        self.localVideoTrack = [self createVideoTrack];
        [self startCaptureLocalVideoWithPosition:position completionHandler:^{
            [localStream addVideoTrack:self.localVideoTrack];
            //[self.peerConnection addStream:localStream];
            if (completionHandler) {
                completionHandler();
            }
        }];
        self.cameraPosition = position;
    }
}
Take a look at the commented lines: if you start adding/removing the stream from the peer connection, it will cause a delay in the video connection.
I'm using GoogleWebRTC-1.1.25102
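For reference, with the newer RTCCameraVideoCapturer API the switch can be done by simply restarting the capturer on the other device, without touching the stream or the peer connection at all. A rough Swift sketch (capturer is assumed to be the RTCCameraVideoCapturer feeding your local RTCVideoSource):
import AVFoundation
import WebRTC

func switchCamera(to position: AVCaptureDevice.Position) {
    // Find a capture device on the requested side.
    guard let device = RTCCameraVideoCapturer.captureDevices()
        .first(where: { $0.position == position }) else { return }
    // Take the first supported format and its highest frame rate; a real app
    // would match these to the current call settings.
    guard let format = RTCCameraVideoCapturer.supportedFormats(for: device).first else { return }
    let fps = format.videoSupportedFrameRateRanges.map { Int($0.maxFrameRate) }.max() ?? 30
    capturer.startCapture(with: device, format: format, fps: fps)
}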

How to check whether the Secure Enclave is available on a device

As we know, the Secure Enclave is a coprocessor first fabricated in the Apple A7 and available in the A7 and later, and it became publicly usable in iOS 9 via kSecAttrTokenIDSecureEnclave. But how do we check whether a given device supports the Secure Enclave or not?
Thanks
I didn't find any API for this, so I made my own check:
+ (BOOL)isDeviceOkForSecureEnclave
{
    double OSVersionNumber = floor(NSFoundationVersionNumber);
    UIUserInterfaceIdiom deviceType = [[UIDevice currentDevice] userInterfaceIdiom];
    BOOL isOSForSecureEnclave = OSVersionNumber > NSFoundationVersionNumber_iOS_8_4; // iOS 9 and up are ready for SE
    BOOL isDeviceModelForSecureEnclave = NO;
    switch (deviceType) {
        case UIUserInterfaceIdiomPhone:
            // iPhone
            isDeviceModelForSecureEnclave = [self isPhoneForSE];
            break;
        case UIUserInterfaceIdiomPad:
            // iPad
            isDeviceModelForSecureEnclave = [self isPadForSE];
            break;
        default:
            isDeviceModelForSecureEnclave = NO;
            break;
    }
    return isOSForSecureEnclave && isDeviceModelForSecureEnclave;
}
/**
 The arrays are models that we know do not have SE in hardware, so if the current device is on the list it means it doesn't have SE
 */
+ (BOOL)isPhoneForSE
{
    NSString *thisPlatform = [self platform];
    NSArray *oldModels = [NSArray arrayWithObjects:
                          @"x86_64",
                          @"iPhone1,1",
                          @"iPhone1,2",
                          @"iPhone2,1",
                          @"iPhone3,1",
                          @"iPhone3,3",
                          @"iPhone4,1",
                          @"iPhone5,1",
                          @"iPhone5,2",
                          @"iPhone5,3",
                          @"iPhone5,4", nil];
    BOOL isInList = [oldModels containsObject:thisPlatform];
    return !isInList;
}

+ (BOOL)isPadForSE
{
    // iPad Mini 2 is the earliest with SE // "iPad4,4"
    NSString *thisPlatform = [self platform];
    NSArray *oldModels = [NSArray arrayWithObjects:
                          @"x86_64",
                          @"iPad",
                          @"iPad1,0",
                          @"iPad1,1",
                          @"iPad2,1",
                          @"iPad2,2",
                          @"iPad2,3",
                          @"iPad2,4",
                          @"iPad2,5",
                          @"iPad2,6",
                          @"iPad2,7",
                          @"iPad3,1",
                          @"iPad3,2",
                          @"iPad3,3",
                          @"iPad3,4",
                          @"iPad3,5",
                          @"iPad3,6", nil];
    BOOL isInList = [oldModels containsObject:thisPlatform];
    return !isInList;
}
// sysctlbyname requires <sys/sysctl.h>
#import <sys/sysctl.h>

+ (NSString *)platform
{
    size_t size;
    sysctlbyname("hw.machine", NULL, &size, NULL, 0);
    char *machine = malloc(size);
    sysctlbyname("hw.machine", machine, &size, NULL, 0);
    NSString *platform = [NSString stringWithUTF8String:machine];
    free(machine);
    return platform;
}
@end
To check Touch ID:
- (BOOL)canAuthenticateByTouchId {
    if ([LAContext class]) {
        return [[[LAContext alloc] init] canEvaluatePolicy:LAPolicyDeviceOwnerAuthenticationWithBiometrics error:nil];
    }
    return NO; // LAContext is unavailable, so Touch ID cannot be used
}
You can also find other approaches for detecting the Secure Enclave.
The above solution works, but it seems like a hack, so I am adding another solution in Swift 4.
To check Secure Enclave availability:
enum Device {

    // Check whether the device has a Secure Enclave or not
    public static var hasSecureEnclave: Bool {
        return !isSimulator && hasBiometrics
    }

    // Check whether this is the simulator
    public static var isSimulator: Bool {
        return TARGET_OS_SIMULATOR == 1
    }

    // Check whether this device has biometrics available
    private static var hasBiometrics: Bool {
        // Local Authentication context
        let localAuthContext = LAContext()
        var error: NSError?
        /// Policies can have certain requirements which, when not satisfied, would always cause
        /// the policy evaluation to fail - e.g. a passcode set, a fingerprint
        /// enrolled with Touch ID or a face set up with Face ID. This method allows easy checking
        /// for such conditions.
        var isValidPolicy = localAuthContext.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &error)
        guard isValidPolicy == true else {
            if #available(iOS 11, *) {
                // Any error other than "biometry not available" still means the hardware is there
                isValidPolicy = error!.code != LAError.biometryNotAvailable.rawValue
            } else {
                isValidPolicy = error!.code != LAError.touchIDNotAvailable.rawValue
            }
            return isValidPolicy
        }
        return isValidPolicy
    }
}
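Usage is then a one-liner:
if Device.hasSecureEnclave {
    // Safe to create keys with kSecAttrTokenIDSecureEnclave here.
    print("Secure Enclave available")
}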
To check whether Touch ID is available:
var error: NSError?
let hasTouchID = LAContext().canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &error)
if hasTouchID || (error?.code != LAError.touchIDNotAvailable.rawValue) {
    print("Touch ID available on device")
}
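On iOS 11 and later you can also distinguish which biometry is present via LAContext.biometryType (note it is only populated after calling canEvaluatePolicy):
import LocalAuthentication

let context = LAContext()
var policyError: NSError?
_ = context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &policyError)
if #available(iOS 11.0, *) {
    switch context.biometryType {
    case .touchID: print("Touch ID available")
    case .faceID:  print("Face ID available")
    default:       print("No biometry available")
    }
}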
If you want the solution in Objective-C, refer to this link: Solution in Objective-C.

Implementation of RTCDataChannel of WebRTC in iOS

I am using the ISBX/apprtc-ios code for video chat implementation. It works perfectly on iPhone and on the simulator. I want to send text/string data between two peers, and I am using the RTCDataChannel class.
Following is my implementation; I am not able to establish the connection. It always gives the status kRTCDataChannelStateConnecting. How can I get the RTCDataChannel connected? Is there any working implementation available for the WebRTC RTCDataChannel on iOS?
- (void)createNewDataChannel {
    if (self.clientDataChannel) {
        switch (self.clientDataChannel.state) {
            case kRTCDataChannelStateConnecting:
                NSLog(@"kRTCDataChannelStateConnecting");
                break;
            case kRTCDataChannelStateOpen:
                NSLog(@"kRTCDataChannelStateOpen");
                break;
            case kRTCDataChannelStateClosing:
                NSLog(@"kRTCDataChannelStateClosing");
                break;
            case kRTCDataChannelStateClosed:
                NSLog(@"kRTCDataChannelStateClosed");
                break;
            default:
                NSLog(@"Unknown");
        }
        return;
    }
    if (self.peerConnection == nil) {
        NSLog(@"Peerconnection is nil");
    }
    RTCDataChannelInit *dataChannelInit = [[RTCDataChannelInit alloc] init];
    dataChannelInit.maxRetransmits = 0;
    dataChannelInit.isOrdered = false;
    dataChannelInit.maxRetransmitTimeMs = -1;
    dataChannelInit.isNegotiated = false;
    dataChannelInit.streamId = 25;
    RTCDataChannel *dataChannel = [_peerConnection createDataChannelWithLabel:@"commands" config:dataChannelInit];
    dataChannel.delegate = self;
    self.clientDataChannel = dataChannel;
    if (self.clientDataChannel == nil) {
        NSLog(@"Datachannel is nil");
    } else {
        NSLog(@"Datachannel is working");
    }
}
I am able to send data through RTCDataChannel. What I did: before sending the offer, I created the RTCDataChannelInit with the configuration below.
RTCDataChannelInit *datainit = [[RTCDataChannelInit alloc] init];
datainit.isNegotiated = YES;
datainit.isOrdered = YES;
datainit.maxRetransmits = 30;
datainit.maxRetransmitTimeMs = 30000;
datainit.streamId = 1;
self.dataChannel = [_peerConnection createDataChannelWithLabel:@"commands" config:datainit];
self.dataChannel.delegate = self;
Once both devices are connected, I check the state in the delegate method; the state of the channel is open.
- (void)channelDidChangeState:(RTCDataChannel *)channel
{
    NSLog(@"channel.state %u", channel.state);
}
Then I send the data as in the code below:
RTCDataBuffer *buffer = [[RTCDataBuffer alloc] initWithData:[str dataUsingEncoding:NSUTF8StringEncoding] isBinary:NO];
BOOL x = [self.dataChannel sendData:buffer];
The configuration I used was given here:
https://groups.google.com/forum/#!searchin/discuss-webrtc/RTCDataChannel/discuss-webrtc/9NObqxnItCg/mRvXBIwkA7wJ
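For completeness, the receiving side gets its data through the RTCDataChannelDelegate callbacks. The sketch below uses the method names from the current GoogleWebRTC pod, which differ from the old libjingle headers used above (MyClient is a placeholder for your own class):
import WebRTC

extension MyClient: RTCDataChannelDelegate {
    func dataChannelDidChangeState(_ dataChannel: RTCDataChannel) {
        print("channel state: \(dataChannel.readyState.rawValue)")
    }

    func dataChannel(_ dataChannel: RTCDataChannel, didReceiveMessageWith buffer: RTCDataBuffer) {
        // Text messages arrive as UTF-8 data with isBinary == false.
        if !buffer.isBinary, let text = String(data: buffer.data, encoding: .utf8) {
            print("received: \(text)")
        }
    }
}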

Copying Swift arrays from background to foreground

If we go from a background thread to the foreground in Swift, what is the proper way to do [nsObject copy]?
For example in Objective-C, we would loop through a long array of ALAssets (say like 10,000+) in the background by doing:
[alGroup enumerateAssetsUsingBlock:^(ALAsset *alPhoto, NSUInteger index, BOOL *stop)
{
    // Here to make changes for speed up image loading from device library...
    // =====================================================
    // >>>>>>>>>>>>>>>>>>> IN BACKGROUND <<<<<<<<<<<<<<<<<<<
    // =====================================================
    if (alPhoto == nil)
    {
        c(@"number of assets to display: %d", (int)bgAssetMedia.count);
        // c(@"All device library photos uploaded into memory...%@", bgAssetMedia);
        dispatch_async(dispatch_get_main_queue(), ^(void)
        {
            // =====================================================
            // >>>>>>>>>>>>>>>>>>> IN FOREGROUND <<<<<<<<<<<<<<<<<<<
            // =====================================================
            [ui hideSpinner];
            if (_bReverse)
                // Here we are copying all the photos from the device library into an array (_assetPhotos)...
                _assetPhotos = [[NSMutableArray alloc] initWithArray:[[[bgAssetMedia copy] reverseObjectEnumerator] allObjects]];
            else
                _assetPhotos = [[NSMutableArray alloc] initWithArray:[bgAssetMedia copy]];
            // NSLog(@"%lu", (unsigned long)_assetPhotos.count);
            if (_assetPhotos.count > 0)
            {
                result(_assetPhotos);
            }
        });
    } else {
        // if we have a Custom album, let's remove all shared videos from the Camera Roll
        if (![self isPhotoInCustomAlbum:alPhoto])
        {
            // for some reason, shared glancy videos still show with 00:00 minutes and seconds, so remove them now
            BOOL isVideo = [[alPhoto valueForProperty:ALAssetPropertyType] isEqual:ALAssetTypeVideo];
            int duration = 0;
            int minutes = 0;
            int seconds = 0;
            // NSString *bgVideoLabel = nil;
            if (isVideo)
            {
                NSString *strduration = [alPhoto valueForProperty:ALAssetPropertyDuration];
                duration = [strduration intValue];
                minutes = duration / 60;
                seconds = duration % 60;
                // bgVideoLabel = [NSString stringWithFormat:@"%d:%02d", minutes, seconds];
                if (minutes > 0 || seconds > 0)
                {
                    [bgAssetMedia addObject:alPhoto];
                }
            } else {
                [bgAssetMedia addObject:alPhoto];
            }
        }
    }
    // NSLog(@"%lu", (unsigned long)bgAssetMedia.count);
}];
Then, we would switch to the foreground to update the UIViewController, via this line from the snippet above:
_assetPhotos = [[NSMutableArray alloc] initWithArray:[bgAssetMedia copy]];
The "copy" function was the black magic that allowed us to quickly marshal the memory from background to foreground without having to loop through array again.
Is there a similar method in Swift? Perhaps something like this:
_assetPhotos = NSMutableArray(array: bgAssetMedia.copy())
Is Swift thread-safe now for passing memory pointers from background to foreground? What's the new protocol? Thank you.
I found the answer. After running large queries on the Realm and Core Data database contexts, I found it easiest to just make a basic copy of the memory pointer and downcast it to match the class.
let mediaIdFG = mediaId.copy() as! String
Full example in context below:
static func createOrUpdate(dictionary: NSDictionary) -> Promise<Media> {
    // Query and update from any thread
    return Promise { fulfill, reject in
        executeInBackground {
            // c("BG media \(dictionary)")
            let realm: RLMRealm = RLMRealm.defaultRealm()
            realm.beginWriteTransaction()
            let media = Media.createOrUpdateInRealm(realm, withJSONDictionary: dictionary as [NSObject: AnyObject])
            // media.type = type
            c("BG media \(media)")
            let mediaId = media.localIdentifier
            do {
                try realm.commitWriteTransaction()
                executeInForeground({
                    let mediaIdFG = mediaId.copy() as! String
                    let newMedia = Media.findOneByLocalIdentifier(mediaIdFG)
                    c("FG \(mediaIdFG) newMedia \(newMedia)")
                    fulfill(newMedia)
                })
            } catch {
                reject(Constants.createError("Realm Something went wrong!"))
            }
        }
    } // return promise
} // func createOrUpdate
Posting my own answer to share my findings. I also found this helpful article about Swift's copy(), a.k.a. Objective-C's copyWithZone: https://www.hackingwithswift.com/example-code/system/how-to-copy-objects-in-swift-using-copy
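Note that copy() only does something useful for types that adopt NSCopying (Foundation classes like NSString and NSArray already do). For your own model classes, a minimal sketch looks like this (Photo is a hypothetical example class):
import Foundation

class Photo: NSObject, NSCopying {
    let localIdentifier: String

    init(localIdentifier: String) {
        self.localIdentifier = localIdentifier
    }

    // Called by copy(); returns an independent instance that is safe
    // to hand to another thread.
    func copy(with zone: NSZone? = nil) -> Any {
        return Photo(localIdentifier: localIdentifier)
    }
}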

iPhone get SSID without private library

I have a commercial app that has a completely legitimate reason to see the SSID of the network it is connected to: if it is connected to an ad-hoc network for a 3rd-party hardware device, it needs to function in a different manner than if it is connected to the internet.
Everything I've seen about getting the SSID tells me I have to use Apple80211, which I understand is a private library. I also read that if I use a private library Apple will not approve the app.
Am I stuck between an Apple and a hard place, or is there something I'm missing here?
As of iOS 7 or 8, you can do this (it needs an entitlement on iOS 12+, as shown below):
@import SystemConfiguration.CaptiveNetwork;

/** Returns first non-empty SSID network info dictionary.
 *  @see CNCopyCurrentNetworkInfo */
- (NSDictionary *)fetchSSIDInfo {
    NSArray *interfaceNames = CFBridgingRelease(CNCopySupportedInterfaces());
    NSLog(@"%s: Supported interfaces: %@", __func__, interfaceNames);
    NSDictionary *SSIDInfo;
    for (NSString *interfaceName in interfaceNames) {
        SSIDInfo = CFBridgingRelease(
            CNCopyCurrentNetworkInfo((__bridge CFStringRef)interfaceName));
        NSLog(@"%s: %@ => %@", __func__, interfaceName, SSIDInfo);
        BOOL isNotEmpty = (SSIDInfo.count > 0);
        if (isNotEmpty) {
            break;
        }
    }
    return SSIDInfo;
}
Example output:
2011-03-04 15:32:00.669 ShowSSID[4857:307] -[ShowSSIDAppDelegate fetchSSIDInfo]: Supported interfaces: (
en0
)
2011-03-04 15:32:00.693 ShowSSID[4857:307] -[ShowSSIDAppDelegate fetchSSIDInfo]: en0 => {
BSSID = "ca:fe:ca:fe:ca:fe";
SSID = XXXX;
SSIDDATA = <01234567 01234567 01234567>;
}
Note that no interfaces are supported on the simulator. Test on your device.
iOS 12
You must enable the Access WiFi Information capability.
Important
To use this function in iOS 12 and later, enable the Access WiFi Information capability for your app in Xcode. When you enable this capability, Xcode automatically adds the Access WiFi Information entitlement to your entitlements file and App ID. Documentation link
Swift 4.2
func getConnectedWifiInfo() -> [AnyHashable: Any]? {
    if let ifs = CFBridgingRetain(CNCopySupportedInterfaces()) as? [String],
        let ifName = ifs.first as CFString?,
        let info = CFBridgingRetain(CNCopyCurrentNetworkInfo(ifName)) as? [AnyHashable: Any] {
        return info
    }
    return nil
}
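Usage, assuming the same "SSID" key as in the Objective-C examples below:
if let info = getConnectedWifiInfo(), let ssid = info["SSID"] as? String {
    print("Connected to \(ssid)")
}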
UPDATE FOR iOS 10 and up
CNCopySupportedInterfaces is no longer deprecated in iOS 10. (API Reference)
You need to import SystemConfiguration/CaptiveNetwork.h and add SystemConfiguration.framework to your target's Linked Libraries (under build phases).
Here is a code snippet in Swift (RikiRiocma's answer):
import Foundation
import SystemConfiguration.CaptiveNetwork

public class SSID {
    class func fetchSSIDInfo() -> String {
        var currentSSID = ""
        if let interfaces = CNCopySupportedInterfaces() {
            for i in 0..<CFArrayGetCount(interfaces) {
                let interfaceName: UnsafePointer<Void> = CFArrayGetValueAtIndex(interfaces, i)
                let rec = unsafeBitCast(interfaceName, AnyObject.self)
                let unsafeInterfaceData = CNCopyCurrentNetworkInfo("\(rec)")
                if unsafeInterfaceData != nil {
                    let interfaceData = unsafeInterfaceData! as Dictionary!
                    currentSSID = interfaceData["SSID"] as! String
                }
            }
        }
        return currentSSID
    }
}
(Important: CNCopySupportedInterfaces returns nil on simulator.)
For Objective-C, see Esad's answer here and below:
+ (NSString *)GetCurrentWifiHotSpotName {
    NSString *wifiName = nil;
    NSArray *ifs = (__bridge_transfer id)CNCopySupportedInterfaces();
    for (NSString *ifnam in ifs) {
        NSDictionary *info = (__bridge_transfer id)CNCopyCurrentNetworkInfo((__bridge CFStringRef)ifnam);
        if (info[@"SSID"]) {
            wifiName = info[@"SSID"];
        }
    }
    return wifiName;
}
UPDATE FOR iOS 9
As of iOS 9 Captive Network is deprecated*. (source)
*No longer deprecated in iOS 10, see above.
It's recommended you use NEHotspotHelper (source)
You will need to email apple at networkextension@apple.com and request entitlements. (source)
Sample Code (Not my code. See Pablo A's answer):
for (NEHotspotNetwork *hotspotNetwork in [NEHotspotHelper supportedNetworkInterfaces]) {
    NSString *ssid = hotspotNetwork.SSID;
    NSString *bssid = hotspotNetwork.BSSID;
    BOOL secure = hotspotNetwork.secure;
    BOOL autoJoined = hotspotNetwork.autoJoined;
    double signalStrength = hotspotNetwork.signalStrength;
}
Side note: Yup, they deprecated CNCopySupportedInterfaces in iOS 9 and reversed their position in iOS 10. I spoke with an Apple networking engineer and the reversal came after so many people filed Radars and spoke out about the issue on the Apple Developer forums.
Here's the cleaned up ARC version, based on @elsurudo's code:
- (id)fetchSSIDInfo {
    NSArray *ifs = (__bridge_transfer NSArray *)CNCopySupportedInterfaces();
    NSLog(@"Supported interfaces: %@", ifs);
    NSDictionary *info;
    for (NSString *ifnam in ifs) {
        info = (__bridge_transfer NSDictionary *)CNCopyCurrentNetworkInfo((__bridge CFStringRef)ifnam);
        NSLog(@"%@ => %@", ifnam, info);
        if (info && [info count]) { break; }
    }
    return info;
}
This works for me on the device (not the simulator). Make sure you add the SystemConfiguration framework.
#import <SystemConfiguration/CaptiveNetwork.h>
+ (NSString *)currentWifiSSID {
    // Does not work on the simulator.
    NSString *ssid = nil;
    NSArray *ifs = (__bridge_transfer id)CNCopySupportedInterfaces();
    for (NSString *ifnam in ifs) {
        NSDictionary *info = (__bridge_transfer id)CNCopyCurrentNetworkInfo((__bridge CFStringRef)ifnam);
        if (info[@"SSID"]) {
            ssid = info[@"SSID"];
        }
    }
    return ssid;
}
This code works well for getting the SSID.
#import <SystemConfiguration/CaptiveNetwork.h>
@implementation IODAppDelegate

@synthesize window = _window;

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    CFArrayRef myArray = CNCopySupportedInterfaces();
    CFDictionaryRef myDict = CNCopyCurrentNetworkInfo(CFArrayGetValueAtIndex(myArray, 0));
    NSLog(@"Connected at: %@", myDict);
    NSDictionary *myDictionary = (__bridge_transfer NSDictionary *)myDict;
    NSString *BSSID = [myDictionary objectForKey:@"BSSID"];
    NSLog(@"bssid is %@", BSSID);
    // Override point for customization after application launch.
    return YES;
}
And this is the result:
Connected at:{
BSSID = 0;
SSID = "Eqra'aOrange";
SSIDDATA = <45717261 27614f72 616e6765>;
}
If you are running iOS 12 you will need to do an extra step.
I've been struggling to make this code work and finally found this on Apple's site:
"Important
To use this function in iOS 12 and later, enable the Access WiFi Information capability for your app in Xcode. When you enable this capability, Xcode automatically adds the Access WiFi Information entitlement to your entitlements file and App ID."
https://developer.apple.com/documentation/systemconfiguration/1614126-cncopycurrentnetworkinfo
See CNCopyCurrentNetworkInfo in CaptiveNetwork: http://developer.apple.com/library/ios/#documentation/SystemConfiguration/Reference/CaptiveNetworkRef/Reference/reference.html.
Here's the short & sweet Swift version.
Remember to link and import the Framework:
import UIKit
import SystemConfiguration.CaptiveNetwork
Define the method:
func fetchSSIDInfo() -> CFDictionary? {
    if let
        ifs = CNCopySupportedInterfaces().takeUnretainedValue() as? [String],
        ifName = ifs.first,
        info = CNCopyCurrentNetworkInfo(ifName as CFStringRef)
    {
        return info.takeUnretainedValue()
    }
    return nil
}
Call the method when you need it:
if let
    ssidInfo = fetchSSIDInfo() as? [String: AnyObject],
    ssID = ssidInfo["SSID"] as? String
{
    println("SSID: \(ssID)")
} else {
    println("SSID not found")
}
As mentioned elsewhere, this only works on your iDevice. When not on WiFi, the method will return nil – hence the optional.
For iOS 13
As of iOS 13, your app also needs Core Location access in order to use the CNCopyCurrentNetworkInfo function, unless it configured the current network or has VPN configurations:
So this is what you need (see apple documentation):
- Link the CoreLocation.framework library
- Add location-services as a UIRequiredDeviceCapabilities Key/Value in Info.plist
- Add a NSLocationWhenInUseUsageDescription Key/Value in Info.plist describing why your app requires Core Location
- Add the "Access WiFi Information" entitlement for your app
Now as an Objective-C example, first check if location access has been accepted before reading the network info using CNCopyCurrentNetworkInfo:
- (void)fetchSSIDInfo {
    NSString *ssid = NSLocalizedString(@"not_found", nil);
    if (@available(iOS 13.0, *)) {
        if ([CLLocationManager authorizationStatus] == kCLAuthorizationStatusDenied) {
            NSLog(@"User has explicitly denied authorization for this application, or location services are disabled in Settings.");
        } else {
            CLLocationManager *cllocation = [[CLLocationManager alloc] init];
            if (![CLLocationManager locationServicesEnabled] || [CLLocationManager authorizationStatus] == kCLAuthorizationStatusNotDetermined) {
                [cllocation requestWhenInUseAuthorization];
                usleep(500);
                return [self fetchSSIDInfo];
            }
        }
    }
    NSArray *ifs = (__bridge_transfer id)CNCopySupportedInterfaces();
    id info = nil;
    for (NSString *ifnam in ifs) {
        info = (__bridge_transfer id)CNCopyCurrentNetworkInfo((__bridge CFStringRef)ifnam);
        NSDictionary *infoDict = (NSDictionary *)info;
        for (NSString *key in infoDict.allKeys) {
            if ([key isEqualToString:@"SSID"]) {
                ssid = [infoDict objectForKey:key];
            }
        }
    }
    ...
    ...
}
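A rough Swift equivalent of the same flow, assuming the same entitlement and Info.plist keys (in production, respond to the CLLocationManagerDelegate authorization callback instead of polling):
import CoreLocation
import SystemConfiguration.CaptiveNetwork

func currentSSID() -> String? {
    if #available(iOS 13.0, *) {
        // Without location permission, CNCopyCurrentNetworkInfo returns nil on iOS 13+.
        guard CLLocationManager.authorizationStatus() != .denied else { return nil }
    }
    guard let interfaces = CNCopySupportedInterfaces() as? [String] else { return nil }
    for name in interfaces {
        if let info = CNCopyCurrentNetworkInfo(name as CFString) as? [String: Any],
           let ssid = info["SSID"] as? String {
            return ssid
        }
    }
    return nil
}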
