Audio Output Routes for AirPlay - iOS

I have looked but can't find a way to access the audio output routes so I can detect whether the audio is coming out via AirPlay.
This is what I found in the documentation for iOS 5.0:
kAudioSessionOutputRoute_AirPlay
Discussion
These strings are used as values for the kAudioSession_AudioRouteKey_Type key for the dictionary associated with the kAudioSession_AudioRouteKey_Outputs array.
I can't find a way to get access to the kAudioSession_AudioRouteKey_Outputs array.
Thanks

Even if Bassem seems to have found a solution, for completion's sake, here's how to detect whether the current output route is AirPlay or not:
- (BOOL)isAirPlayActive {
    CFDictionaryRef currentRouteDescriptionDictionary = nil;
    UInt32 dataSize = sizeof(currentRouteDescriptionDictionary);
    AudioSessionGetProperty(kAudioSessionProperty_AudioRouteDescription, &dataSize, &currentRouteDescriptionDictionary);
    if (currentRouteDescriptionDictionary) {
        CFArrayRef outputs = CFDictionaryGetValue(currentRouteDescriptionDictionary, kAudioSession_AudioRouteKey_Outputs);
        if (outputs != NULL && CFArrayGetCount(outputs) > 0) {
            CFDictionaryRef currentOutput = CFArrayGetValueAtIndex(outputs, 0);
            CFStringRef outputType = CFDictionaryGetValue(currentOutput, kAudioSession_AudioRouteKey_Type);
            return (CFStringCompare(outputType, kAudioSessionOutputRoute_AirPlay, 0) == kCFCompareEqualTo);
        }
    }
    return NO;
}
Keep in mind that you have to #import <AudioToolbox/AudioToolbox.h> and link against the AudioToolbox framework.

Since iOS 6, the recommended approach for this would be using AVAudioSession (the C-based AudioSession API is deprecated as of iOS 7).
let currentRoute = AVAudioSession.sharedInstance().currentRoute
currentRoute returns an AVAudioSessionRouteDescription, a very simple class with two properties: inputs and outputs. Each of these is an array of AVAudioSessionPortDescription objects, which provide the information we need about the current route:
let outputs = currentRoute.outputs
// Usually, there will be just one output port (or none), but let's play it safe...
let airplayOutputs = outputs.filter { $0.portType == AVAudioSessionPortAirPlay }
if !airplayOutputs.isEmpty {
    // Connected to an AirPlay output...
} else {
    // Not connected to an AirPlay output...
}
The portType is the useful info here... see the AVAudioSessionPortDescription docs for the AVAudioSessionPort... constants that describe each input/output port type, such as line in/out, built-in speaker, Bluetooth LE, headset mic, etc.
Also, don't forget to respond appropriately to route changes by subscribing to AVAudioSessionRouteChangeNotification.
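For example, here is a minimal sketch of such a subscription (my own example, not from the answer above; in current Swift the constant is exposed as AVAudioSession.routeChangeNotification):

import AVFoundation

// Sketch: re-run the AirPlay check whenever the audio route changes
NotificationCenter.default.addObserver(forName: AVAudioSession.routeChangeNotification,
                                       object: AVAudioSession.sharedInstance(),
                                       queue: .main) { _ in
    let isAirPlay = AVAudioSession.sharedInstance().currentRoute.outputs
        .contains { $0.portType == .airPlay }
    print("AirPlay active: \(isAirPlay)")
}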

CFArrayRef destinations = NULL;
UInt32 dataSize = sizeof(destinations);
// Get the output destination list
AudioSessionGetProperty(kAudioSessionProperty_OutputDestinations, &dataSize, &destinations);

CFNumberRef currentDest = NULL;
dataSize = sizeof(currentDest);
// Get the index of the current destination (in the list above)
AudioSessionGetProperty(kAudioSessionProperty_OutputDestination, &dataSize, &currentDest);
I'm not too sure of the exact syntax, so you'll have to mess around with it a bit, but you should get the general idea.

Related

Swift - setup photo output for RAW camera app

Kind of new to Swift in general, but I'm trying to make a simple RAW camera app for fun. Apple's documentation says that to configure a photo output, you do
let query = photoOutput.isAppleProRAWEnabled ?
    { AVCapturePhotoOutput.isAppleProRAWPixelFormat($0) } :
    { AVCapturePhotoOutput.isBayerRAWPixelFormat($0) }

// Retrieve the RAW format, favoring Apple ProRAW when enabled.
guard let rawFormat =
        photoOutput.availableRawPhotoPixelFormatTypes.first(where: query) else {
    fatalError("No RAW format found.")
}
but I've been getting an error on the first let statement that says "'isAppleProRAWEnabled' is only available in iOS 14.3 or newer." Is there any way to force it to check for ProRAW, even when not on iOS 14.3? I'm not even interested in using ProRAW, but I can't figure out how to get rid of the check and just select the classic RAW format (which I think is the Bayer format). If anyone knows a workaround, that would be great!
You can query for the Bayer RAW format as below:
let rawFormatQuery = { AVCapturePhotoOutput.isBayerRAWPixelFormat($0) }

guard let rawFormat = photoOutput.availableRawPhotoPixelFormatTypes.first(where: rawFormatQuery) else {
    fatalError("No RAW format found.")
}
Then you set your photo settings using the raw format:
let photoSettings = AVCapturePhotoSettings(rawPixelFormatType: rawFormat,
                                           processedFormat: processedFormat)
Finally, you call your capture delegate as described in the Apple documentation (which I think is where you got the code above).
https://developer.apple.com/documentation/avfoundation/cameras_and_media_capture/capturing_photos_in_raw_and_apple_proraw_formats
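To tie it together, a capture call might look like the sketch below (my own example, not from the answer; it assumes photoOutput belongs to a running session, the processed codec choice is a placeholder, and self conforms to AVCapturePhotoCaptureDelegate):

// Sketch: trigger a RAW capture; the delegate's photoOutput(_:didFinishProcessingPhoto:error:)
// callback then receives the RAW photo data
let processedFormat: [String: Any] = [AVVideoCodecKey: AVVideoCodecType.hevc]
let photoSettings = AVCapturePhotoSettings(rawPixelFormatType: rawFormat,
                                           processedFormat: processedFormat)
photoOutput.capturePhoto(with: photoSettings, delegate: self)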

How to check iOS device chip is A9 or A10?

Apple announced HEVC encode support for A10 devices running iOS 11, and HEVC decode support for A9 devices running iOS 11.
Before creating those hardware codecs, how can I check whether the current device supports the feature?
How can I tell whether the chip is an A8, A9, or A10 without hard-coding the device model?
Don't check for the specific SoC; check for the feature you actually want. You'll need the VideoToolbox call VTIsHardwareDecodeSupported, passing the kCMVideoCodecType_HEVC codec type:
VTIsHardwareDecodeSupported(kCMVideoCodecType_HEVC)
However, iOS has software decoder fallbacks for HEVC if you need them.
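In Swift, that check might look like the following sketch (my own wrapper; VTIsHardwareDecodeSupported requires iOS 11):

import VideoToolbox

// Sketch: returns true when the device has a hardware HEVC decoder
func hasHardwareHEVCDecoder() -> Bool {
    if #available(iOS 11.0, *) {
        return VTIsHardwareDecodeSupported(kCMVideoCodecType_HEVC)
    }
    return false
}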
Edit: Ah, sorry - I misread and thought you were talking about decoding. For encoding, you may be able to get what you want with VTCopySupportedPropertyDictionaryForEncoder, using kCMVideoCodecType_HEVC and specifying the parameters you want to encode. I don't know whether iOS has a fallback software encoder for HEVC, so this may give false positives.
For the encoder, I could not find an official way, but this seems to work in my tests:
#import <AVFoundation/AVFoundation.h>
#import <VideoToolbox/VideoToolbox.h>
- (BOOL)videoCodecTypeHevcIsSupported {
    if (@available(iOS 11.0, *)) {
        CFMutableDictionaryRef encoderSpecification = CFDictionaryCreateMutable(NULL, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        CFDictionarySetValue(encoderSpecification, kVTCompressionPropertyKey_ProfileLevel, kVTProfileLevel_HEVC_Main_AutoLevel);
        CFDictionarySetValue(encoderSpecification, kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);

        OSStatus status = VTCopySupportedPropertyDictionaryForEncoder(3840, 2160, kCMVideoCodecType_HEVC, encoderSpecification, NULL, NULL);
        CFRelease(encoderSpecification);

        if (status == kVTCouldNotFindVideoEncoderErr) {
            return NO;
        }
        return YES;
    }
    return NO;
}
kanso's great answer in Swift 4.
This assumes we target iOS 11 or higher and adds an extra check:
import AVFoundation
import VideoToolbox
@available(iOS 11, *)
func isHEVCHardwareEncodingSupported() -> Bool {
    let encoderSpecDict: [String: Any] = [
        kVTCompressionPropertyKey_ProfileLevel as String: kVTProfileLevel_HEVC_Main_AutoLevel,
        kVTCompressionPropertyKey_RealTime as String: true
    ]
    let status = VTCopySupportedPropertyDictionaryForEncoder(3840, 2160,
                                                             kCMVideoCodecType_HEVC,
                                                             encoderSpecDict as CFDictionary,
                                                             nil, nil)
    if status == kVTCouldNotFindVideoEncoderErr {
        return false
    }
    if status != noErr {
        return false
    }
    return true
}
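A possible call site (my own sketch, not part of the original answer):

if #available(iOS 11, *), isHEVCHardwareEncodingSupported() {
    // safe to ask for kCMVideoCodecType_HEVC from a VTCompressionSession or AVAssetWriter
} else {
    // fall back to H.264 encoding
}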

Decoding H264: VTDecompressionSessionCreate fails with error code -12910 (kVTVideoDecoderUnsupportedDataFormatErr)

I'm getting error -12910 (kVTVideoDecoderUnsupportedDataFormatErr) using VTDecompressionSessionCreate when running code on my iPad, but not on the sim. I'm using Avios (https://github.com/tidwall/Avios) and this is the relevant section:
private func initVideoSession() throws {
    formatDescription = nil
    var _formatDescription: CMFormatDescription?
    let parameterSetPointers: [UnsafePointer<UInt8>] = [pps!.buffer.baseAddress, sps!.buffer.baseAddress]
    let parameterSetSizes: [Int] = [pps!.buffer.count, sps!.buffer.count]
    var status = CMVideoFormatDescriptionCreateFromH264ParameterSets(kCFAllocatorDefault, 2, parameterSetPointers, parameterSetSizes, 4, &_formatDescription)
    if status != noErr {
        throw H264Error.CMVideoFormatDescriptionCreateFromH264ParameterSets(status)
    }
    formatDescription = _formatDescription!

    if videoSession != nil {
        VTDecompressionSessionInvalidate(videoSession)
        videoSession = nil
    }
    var videoSessionM: VTDecompressionSession?

    let decoderParameters = NSMutableDictionary()
    let destinationPixelBufferAttributes = NSMutableDictionary()
    destinationPixelBufferAttributes.setValue(NSNumber(unsignedInt: kCVPixelFormatType_32BGRA), forKey: kCVPixelBufferPixelFormatTypeKey as String)

    var outputCallback = VTDecompressionOutputCallbackRecord()
    outputCallback.decompressionOutputCallback = callback
    outputCallback.decompressionOutputRefCon = UnsafeMutablePointer<Void>(unsafeAddressOf(self))

    status = VTDecompressionSessionCreate(nil, formatDescription, decoderParameters, destinationPixelBufferAttributes, &outputCallback, &videoSessionM)
    if status != noErr {
        throw H264Error.VTDecompressionSessionCreate(status)
    }
    self.videoSession = videoSessionM
}
Here pps and sps are buffers containing the PPS and SPS parameter sets.
As mentioned above, the strange thing is that it works completely fine on the simulator, but not on an actual device. Both are on iOS 9.3, and I'm simulating the same hardware as the device.
What could cause this error?
And, more generally, where can I go for API reference and error docs for VideoToolbox? Genuinely can't find anything of relevance on Apple's site.
The answer turned out to be that the stream resolution was greater than 1920x1080, which is the maximum the iPad supports. This is a clear difference from the simulator, which supports resolutions beyond that (perhaps it just uses the Mac VideoToolbox libraries rather than simulating the iOS ones).
Reducing the stream below 1080p solved the problem.
This is the response from a member of Apple staff which pointed me in the right direction: https://forums.developer.apple.com/thread/11637
As for proper VideoToolbox reference - still nothing of value exists, which is a massive disadvantage. One wonders how the tutorial writers first got their information.
Edit: iOS 10 now appears to support streams greater than 1080p.
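If you cannot control the incoming stream, one defensive option (my own suggestion, not from the linked thread) is to check the dimensions parsed from the SPS before creating the session, reusing the formatDescription from the code above:

// Sketch: guard against streams the device's hardware decoder cannot handle
let dims = CMVideoFormatDescriptionGetDimensions(formatDescription)
if dims.width > 1920 || dims.height > 1080 {
    // request a lower-resolution stream or fall back to a software decoder
}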

iPhone get a list of all SSIDs without private library

Is it possible the get a list of all available SSIDs on the iPhone without using a private library?
I read iPhone get SSID without private library which is about getting details about the current network.
This answer mentions:
If you jailbreak your device you can use the Apple80211 private framework to look up the available Wi-Fi networks and their signal strength. But that also means your app will get rejected.
Apple has the CaptiveNetwork API, but there doesn't seem to be a solution to get a list of all available networks. It seems it's only possible to do so by using the Apple80211 private library, or by connecting to each of them.
Am I missing something, or is there no solution?
Without the use of private library (Apple80211) you can only get the SSID of the network your device is currently connected to.
Since iOS 9, you can use NEHotspotHelper to get a list of SSIDs. But you have to get the com.apple.developer.networking.HotspotHelper entitlement from Apple by sending a request.
Check https://developer.apple.com/documentation/networkextension/nehotspothelper for more information.
Some new APIs have been released as part of the Network Extension in iOS 9 and iOS 11. While neither allows you to scan for networks while your app is running, they both allow you to do related tasks. E.g. you can scan for networks while the Settings Wi-Fi page is running using Hotspot Helper, and you can make it easier for a user to join a network using either of these.
Here's a comparison of the two frameworks.
Hotspot Helper
NEHotspotHelper (introduced in iOS 9, WWDC 2015).
Requires special permission from Apple.
Requires the com.apple.developer.networking.HotspotHelper entitlement.
For step-by-step instructions to get this working, see this answer.
Allows you to participate in the discovery/authentication to a Wi-Fi network via the Wi-Fi screen in the Settings app. You register to be notified when networks are being scanned (e.g. when the user launches Wi-Fi in the Settings app), and you can automatically pre-fill the password and display an annotation near the network name. The user still needs to tap on the network name to connect, but it won't prompt for a password if you pre-filled it.
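A rough sketch of that registration (my own example; it requires the entitlement, and the display-name annotation is just a placeholder):

import NetworkExtension

// Sketch: register a Hotspot Helper; the handler runs when Settings scans for Wi-Fi networks
let options: [String: NSObject] = [kNEHotspotHelperOptionDisplayName: "Example annotation" as NSString]
let queue = DispatchQueue(label: "com.example.hotspot-helper")
let registered = NEHotspotHelper.register(options: options, queue: queue) { command in
    if command.commandType == .filterScanList {
        // command.networkList contains the NEHotspotNetwork objects (SSIDs) found by the scan
        let response = command.createResponse(.success)
        response.setNetworkList(command.networkList ?? [])
        response.deliver()
    }
}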
Hotspot Configuration
NEHotspotConfigurationManager (introduced in iOS 11, WWDC 2017).
Does not require special permission from Apple.
Requires the com.apple.developer.networking.HotspotConfiguration entitlement.
Allows you to initiate a connection to a Wi-Fi network. You give it a list of SSIDs/Passwords that should be connected to while your app is running. It will present a dialog asking the user if they want to connect to the network.
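For example (my own sketch; the SSID and password are placeholders, and the Hotspot Configuration entitlement must be enabled):

import NetworkExtension

// Sketch: ask iOS to join a known network; the system prompts the user to confirm
let configuration = NEHotspotConfiguration(ssid: "ExampleNetwork", passphrase: "example-password", isWEP: false)
configuration.joinOnce = true
NEHotspotConfigurationManager.shared.apply(configuration) { error in
    if let error = error {
        print("Could not join network: \(error.localizedDescription)")
    } else {
        print("Join request handled (already connected or now connecting)")
    }
}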
Step 1: Add SystemConfiguration.framework to your project.
Step 2: Import the following modules:
import SystemConfiguration
import SystemConfiguration.CaptiveNetwork
Step 3: Now use this code:
func getUsedSSID() -> String {
    guard let interfaces = CNCopySupportedInterfaces() as? [String], !interfaces.isEmpty else {
        return "0"
    }
    let interfaceName = interfaces[0]
    guard let interfaceData = CNCopyCurrentNetworkInfo(interfaceName as CFString) as? [String: AnyObject],
          let ssidName = interfaceData["SSID"] as? String else {
        return "0"
    }
    print(ssidName) /* prints the name of the currently connected Wi-Fi network */
    return ssidName
}
First of all, import the two system headers:
#import <SystemConfiguration/SystemConfiguration.h>
#import <SystemConfiguration/CaptiveNetwork.h>
The method below returns the SSID name:
- (NSString *)getNetworkId {
    NSArray *interfacesArray = CFBridgingRelease(CNCopySupportedInterfaces());
    if (interfacesArray.count > 0) {
        NSString *interfaceName = [interfacesArray objectAtIndex:0];
        CFStringRef interfaceNameRef = (__bridge CFStringRef)interfaceName;
        NSDictionary *networkInfo = CFBridgingRelease(CNCopyCurrentNetworkInfo(interfaceNameRef));
        NSString *ssidName = networkInfo[@"SSID"];
        return ssidName;
    }
    return @"No network found";
}
import SystemConfiguration
import SystemConfiguration.CaptiveNetwork

// create variables
var SSIDNameArray = NSMutableArray()
var nameArray: NSArray = []

// Function that returns the SSID seen on each supported interface
// (note: this is still only the currently connected network, not a scan of all SSIDs)
func getUsedSSID() -> NSArray {
    guard let interfaces = CNCopySupportedInterfaces() as? [String], !interfaces.isEmpty else {
        return nameArray
    }
    for interfaceName in interfaces {
        if let interfaceData = CNCopyCurrentNetworkInfo(interfaceName as CFString) as? NSDictionary,
           let ssidName = interfaceData["SSID"] as? String {
            self.SSIDNameArray.add(ssidName)
        }
    }
    nameArray = self.SSIDNameArray.copy() as! NSArray
    return nameArray
}

Knowing resolution of AVCaptureSession's session presets

I'm accessing the camera in iOS and using session presets as so:
captureSession.sessionPreset = AVCaptureSessionPresetMedium;
Pretty standard stuff. However, I'd like to know ahead of time the resolution of the video I'll be getting with this preset (especially because it differs from device to device). I know there are tables online where you can look this up (such as here: http://cmgresearch.blogspot.com/2010/10/augmented-reality-on-iphone-with-ios40.html ). But I'd like to be able to get this programmatically so that I'm not just relying on magic numbers.
So, something like this (theoretically):
[captureSession resolutionForPreset:AVCaptureSessionPresetMedium];
which might return a CGSize of { width: 360, height: 480}. I have not been able to find any such API, so far I've had to resort to waiting to get my first captured image and querying it then (which for other reasons in my program flow is not good).
I am no AVFoundation pro, but I think the way to go is:
captureSession.sessionPreset = AVCaptureSessionPresetMedium;
AVCaptureInput *input = [captureSession.inputs objectAtIndex:0]; // maybe search the input in array
AVCaptureInputPort *port = [input.ports objectAtIndex:0];
CMFormatDescriptionRef formatDescription = port.formatDescription;
CMVideoDimensions dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription);
I'm not sure about the last step and I didn't try it myself. Just found that in the documentation and think it should work.
Searching for CMVideoDimensions in Xcode you'll find the RosyWriter example project. Have a look at that code (I don't have time to do that now).
You can programmatically get the resolution from activeFormat before capture begins, though not before adding inputs and outputs: https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVCaptureDevice_Class/index.html#//apple_ref/occ/instp/AVCaptureDevice/activeFormat
private func getCaptureResolution() -> CGSize {
    // Define default resolution
    var resolution = CGSize(width: 0, height: 0)

    // Get current video device
    let curVideoDevice = useBackCamera ? backCameraDevice : frontCameraDevice

    // Set if video portrait orientation
    let portraitOrientation = orientation == .Portrait || orientation == .PortraitUpsideDown

    // Get video dimensions
    if let formatDescription = curVideoDevice?.activeFormat.formatDescription {
        let dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription)
        resolution = CGSize(width: CGFloat(dimensions.width), height: CGFloat(dimensions.height))
        if portraitOrientation {
            resolution = CGSize(width: resolution.height, height: resolution.width)
        }
    }

    // Return resolution
    return resolution
}
FYI, I attach here an official reply from Apple.
This is a follow-up to Bug ID# 13201137.
Engineering has determined that this issue behaves as intended based on the following information:
There are several problems with the included code:
1) The AVCaptureSession has no inputs.
2) The AVCaptureSession has no outputs.
Without at least one input (added to the session using [AVCaptureSession addInput:]) and a compatible output (added using [AVCaptureSession addOutput:]), there will be no active connections; therefore, the session won't actually run the input device. It doesn't need to -- there are no outputs to which to deliver any camera data.
3) The JAViewController class assumes that the video port's -formatDescription property will be non nil as soon as [AVCaptureSession startRunning] returns.
There is no guarantee that the format description will be updated with the new camera format as soon as startRunning returns. -startRunning starts up the camera and returns when it is completely up and running, but doesn't wait for video frames to be actively flowing through the capture pipeline, which is when the format description would be updated.
You're just querying too fast. If you waited a few milliseconds more, it would be there. But the right way to do this is to listen for the AVCaptureInputPortFormatDescriptionDidChangeNotification.
4) Your JAViewController class creates a PVCameraInfo object in retrieveCameraInfo: and asks it a question, then lets it fall out of scope, where it is released and dealloc'ed.
Therefore, the session doesn't have long enough to run to satisfy your dimensions request. You stop the camera too quickly.
We consider this issue closed. If you have any questions or concern regarding this issue, please update your report directly (http://bugreport.apple.com).
Thank you for taking the time to notify us of this issue.
Best Regards,
Developer Bug Reporting Team
Apple Worldwide Developer Relations
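Following that advice, an observer might look like this sketch (my own example; I believe the notification's object is the affected AVCaptureInput.Port, but treat that as an assumption):

import AVFoundation

// Sketch: wait for the port's format description to be populated instead of polling after startRunning()
NotificationCenter.default.addObserver(forName: .AVCaptureInputPortFormatDescriptionDidChange,
                                       object: nil,
                                       queue: .main) { notification in
    guard let port = notification.object as? AVCaptureInput.Port,
          let formatDescription = port.formatDescription else { return }
    let dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription)
    print("Capture dimensions: \(dimensions.width)x\(dimensions.height)")
}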
According to Apple, there's no API for that. It stinks, I've had the same problem.
Maybe you can provide a list of all possible preset resolutions for every iPhone model and check which device model the app is running on, using something like this (note that platformType/platformString come from a third-party UIDevice category, not the standard SDK):
[[UIDevice currentDevice] platformType]   // ex: UIDevice4GiPhone
[[UIDevice currentDevice] platformString] // ex: @"iPhone 4G"
However, you have to update the list for each newer device model. Hope this helps :)
If the preset is .photo, the returned size is the still-photo size, not the preview (video) size; if the preset is not .photo, the returned size is the video size, not the captured-photo size.
if self.session.sessionPreset != .photo {
    // return video size, not captured photo size
    let format = videoDevice.activeFormat
    let formatDescription = format.formatDescription
    let dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription)
} else {
    // other way to get video size
}
@Christian Beer's answer is a good way for a specified preset. My way is good for the active preset.
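Conversely, if what you need is the still-photo size, one possibility (my assumption, using the older activeFormat API rather than anything from this answer) is:

// Sketch: still-image dimensions advertised by the active format
// (deprecated on newer SDKs in favor of supportedMaxPhotoDimensions)
let photoDimensions = videoDevice.activeFormat.highResolutionStillImageDimensions
print("Photo size: \(photoDimensions.width)x\(photoDimensions.height)")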
The best way to do what you want (get a known video or image format) is to set the format of the capture device.
First find the capture device you want to use:
if #available(iOS 10.0, *) {
    captureDevice = defaultCamera()
} else {
    let devices = AVCaptureDevice.devices()
    // Loop through all the capture devices on this phone
    for device in devices {
        // Make sure this particular device supports video
        if (device as AnyObject).hasMediaType(AVMediaType.video) {
            // Finally check the position and confirm we've got the back camera
            if (device as AnyObject).position == AVCaptureDevice.Position.back {
                captureDevice = device as AVCaptureDevice
            }
        }
    }
}

self.autoLevelWindowCenter = ALCWindow.frame
if captureDevice != nil && currentUser != nil {
    beginSession()
}
func defaultCamera() -> AVCaptureDevice? {
    if #available(iOS 10.0, *) { // only use the wide angle camera, never the dual camera
        if let device = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera,
                                                for: AVMediaType.video,
                                                position: .back) {
            return device
        } else {
            return nil
        }
    } else {
        return nil
    }
}
Then find the formats that that device can use:
let options = captureDevice!.formats
var supportable = options.first as! AVCaptureDevice.Format
for format in options {
    let testFormat = format
    let description = testFormat.description
    if description.contains("60 fps") && description.contains("1280x 720") {
        supportable = testFormat
    }
}
You can do more complex parsing of the formats, but you might not care.
Then just set the device to that format:
do {
    try captureDevice?.lockForConfiguration()
    captureDevice!.activeFormat = supportable
    // setup other capture device stuff like autofocus, frame rate, ISO, shutter speed, etc.
    captureDevice?.unlockForConfiguration()

    // add the device to an active CaptureSession
    captureSession.addInput(try AVCaptureDeviceInput(device: captureDevice!))
} catch {
    // handle lockForConfiguration() / AVCaptureDeviceInput errors here
}
You may want to look at the AVFoundation docs and tutorial on AVCaptureSession as there are lots of things you can do with the output as well. For example, you can convert the result to .mp4 using AVAssetExportSession so that you can post it on YouTube, etc.
Hope this helps
Apple uses a 4:3 ratio for the iPhone camera.
You can use this ratio to get the frame size of the captured video by fixing either the width or the height constraint of the AVCaptureVideoPreviewLayer and setting the aspect-ratio constraint to 4:3.
For example, with the width fixed to 300 pt, the 4:3 ratio gives a height of 400 pt; with the height fixed to 300 pt, the 3:4 ratio gives a width of 225 pt.
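In code, such a constraint might be set up like this sketch (my own example; previewView is a hypothetical container view hosting the AVCaptureVideoPreviewLayer):

// Sketch: fix the width and pin the container to a 3:4 (portrait) aspect ratio
previewView.translatesAutoresizingMaskIntoConstraints = false
NSLayoutConstraint.activate([
    previewView.widthAnchor.constraint(equalToConstant: 300),
    previewView.heightAnchor.constraint(equalTo: previewView.widthAnchor, multiplier: 4.0 / 3.0)
])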
