How to check whether an iOS device's chip is A9 or A10?

Apple announced hardware HEVC encode support for A10 devices running iOS 11, and hardware HEVC decode support for A9 devices running iOS 11.
Before creating those hardware codecs, how can I check whether the current device supports the feature?
How do I tell whether the chip is an A8, A9, or A10 without hard-coding the device model?

Don't check for the specific SoC; check for the feature you actually want. You'll need the VideoToolbox call VTIsHardwareDecodeSupported, passing the kCMVideoCodecType_HEVC codec type:
VTIsHardwareDecodeSupported(kCMVideoCodecType_HEVC)
However, iOS has software decoder fallbacks for HEVC if you need them.
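For reference, a minimal Swift sketch of that decode check, wrapping the call above in an availability guard (assuming iOS 11 as the floor for the API):

import VideoToolbox

// Minimal sketch: returns true only when a hardware HEVC decoder is present.
// When it returns false, the software decoder fallback may still be usable.
func isHEVCHardwareDecodingSupported() -> Bool {
    if #available(iOS 11.0, *) {
        return VTIsHardwareDecodeSupported(kCMVideoCodecType_HEVC)
    }
    return false
}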
Edit: Ah, sorry - I misread and thought you were talking about decoding. For encoding, you may be able to get what you want with VTCopySupportedPropertyDictionaryForEncoder, using the kCMVideoCodecType_HEVC codec type and specifying the parameters you want to encode. I don't know if iOS has a fallback software encoder for HEVC, so this may give false positives.

For the encoder, I could not find an official way, but this seems to work in my tests:
#import <AVFoundation/AVFoundation.h>
#import <VideoToolbox/VideoToolbox.h>

- (BOOL)videoCodecTypeHevcIsSupported {
    if (@available(iOS 11.0, *)) {
        CFMutableDictionaryRef encoderSpecification = CFDictionaryCreateMutable(NULL, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        CFDictionarySetValue(encoderSpecification, kVTCompressionPropertyKey_ProfileLevel, kVTProfileLevel_HEVC_Main_AutoLevel);
        CFDictionarySetValue(encoderSpecification, kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);
        OSStatus status = VTCopySupportedPropertyDictionaryForEncoder(3840, 2160, kCMVideoCodecType_HEVC, encoderSpecification, nil, nil);
        CFRelease(encoderSpecification);
        if (status == kVTCouldNotFindVideoEncoderErr) {
            return NO;
        }
        return YES;
    }
    return NO;
}

kanso's great answer in Swift 4.
Assumes that we target only iOS 11 or higher and adds an extra check:
import AVFoundation
import VideoToolbox

@available(iOS 11, *)
func isHEVCHardwareEncodingSupported() -> Bool {
    let encoderSpecDict: [String: Any] = [
        kVTCompressionPropertyKey_ProfileLevel as String: kVTProfileLevel_HEVC_Main_AutoLevel,
        kVTCompressionPropertyKey_RealTime as String: true
    ]
    let status = VTCopySupportedPropertyDictionaryForEncoder(3840, 2160,
                                                             kCMVideoCodecType_HEVC,
                                                             encoderSpecDict as CFDictionary,
                                                             nil, nil)
    if status == kVTCouldNotFindVideoEncoderErr {
        return false
    }
    if status != noErr {
        return false
    }
    return true
}
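As a usage example, here is a hedged sketch of a call site (illustrative only, assuming a deployment target of iOS 11 or later so the availability attribute is satisfied):

// Illustrative call site: fall back to H.264 when HEVC hardware encoding is unavailable.
let codec: AVVideoCodecType = isHEVCHardwareEncodingSupported() ? .hevc : .h264
let outputSettings: [String: Any] = [AVVideoCodecKey: codec]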

Related

AVAssetWriter codec type hevc

I am trying to transcode an H.264 video to HEVC using AVAssetWriter and it fails on iPhone 6s. Supposedly, the iPhone 6s supports HEVC for transcoding, but not for real-time video encoding. The same code works on iPhone 7 and above. If the iPhone 6s doesn't support the HEVC codec, how do we programmatically determine supported codecs at runtime?
let bitrate = trackBitrate / 5
let trackDimensions = trackSize
let compressionSettings: [String: Any] = [
    AVVideoAverageBitRateKey: bitrate,
    AVVideoMaxKeyFrameIntervalKey: 30,
    AVVideoProfileLevelKey: kVTProfileLevel_HEVC_Main_AutoLevel
]
var videoSettings: [String: Any] = [
    AVVideoWidthKey: trackDimensions.width,
    AVVideoHeightKey: trackDimensions.height,
    AVVideoCompressionPropertiesKey: compressionSettings
]
videoSettings[AVVideoCodecKey] = AVVideoCodecType.hevc
I ended up doing it this way
if #available(iOS 11.0, *), AVCaptureVideoDataOutput().availableVideoCodecTypes.contains(.hevc) {
    // use .hevc settings here
} else {
    // use .h264 settings here
}
The #available check is needed to keep the compiler happy if your app targets anything below iOS 11.
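Putting the two pieces together, here is a hedged sketch (the makeVideoSettings helper name is mine, not from the original code):

import AVFoundation

// Hypothetical helper: build AVAssetWriter video settings, preferring HEVC
// only when the OS/device reports it as an available codec.
func makeVideoSettings(width: Int, height: Int, bitrate: Int) -> [String: Any] {
    var codec: String = AVVideoCodecH264   // String constant, available before iOS 11
    if #available(iOS 11.0, *),
        AVCaptureVideoDataOutput().availableVideoCodecTypes.contains(.hevc) {
        codec = AVVideoCodecType.hevc.rawValue
    }
    return [
        AVVideoCodecKey: codec,
        AVVideoWidthKey: width,
        AVVideoHeightKey: height,
        AVVideoCompressionPropertiesKey: [
            AVVideoAverageBitRateKey: bitrate,
            AVVideoMaxKeyFrameIntervalKey: 30
        ]
    ]
}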
You can get the iPhone model by the following code:
#import <sys/utsname.h>

+ (NSString *)deviceModel {
    struct utsname systemInfo;
    uname(&systemInfo);
    return [NSString stringWithCString:systemInfo.machine encoding:NSUTF8StringEncoding];
}
and then disable HEVC (H.265) encoding on the iPhone 6s and enable it on the iPhone 7 and above.
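For example, a hedged Swift sketch of that model-based gating. The identifiers (e.g. "iPhone9,1" for iPhone 7) are assumptions based on published device strings; the capability checks above are more robust:

// Hedged sketch: treat "iPhone9,x" (iPhone 7 / A10) and newer as HEVC-encode capable.
// Model identifiers are assumptions; prefer the VideoToolbox/AVFoundation checks above.
func hevcEncodeLikelySupported(machine: String) -> Bool {
    guard machine.hasPrefix("iPhone"),
        let majorPart = machine.dropFirst("iPhone".count).split(separator: ",").first,
        let major = Int(String(majorPart)) else {
            return false
    }
    return major >= 9
}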

How to Use Obsoleted Syntax of Swift 3 in Swift 4

I record video while setting the video codec as below:
sessionOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecTypeJPEG]
Xcode says 'AVVideoCodecTypeJPEG' has been renamed to 'AVVideoCodecType.jpeg', that 'AVVideoCodecTypeJPEG' was obsoleted in Swift 3, and it suggests replacing 'AVVideoCodecTypeJPEG' with 'AVVideoCodecType.jpeg'.
After doing that, Xcode says 'jpeg' is only available on iOS 11.0 or newer.
The problem is I have to use iOS 10 and want to use Swift 4.
Is there any solution to use features like this in Swift 4 with iOS 10?
I think the right way to solve such an issue is to use both the new AVVideoCodecType.jpeg and the deprecated AVVideoCodecJPEG, like so:
if #available(iOS 11.0, *) {
    sessionOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecType.jpeg]
} else {
    sessionOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
}

Decoding H264: VTDecompressionSessionCreate fails with error code -12910 (kVTVideoDecoderUnsupportedDataFormatErr)

I'm getting error -12910 (kVTVideoDecoderUnsupportedDataFormatErr) using VTDecompressionSessionCreate when running code on my iPad, but not on the sim. I'm using Avios (https://github.com/tidwall/Avios) and this is the relevant section:
private func initVideoSession() throws {
    formatDescription = nil
    var _formatDescription: CMFormatDescription?
    let parameterSetPointers: [UnsafePointer<UInt8>] = [pps!.buffer.baseAddress, sps!.buffer.baseAddress]
    let parameterSetSizes: [Int] = [pps!.buffer.count, sps!.buffer.count]
    var status = CMVideoFormatDescriptionCreateFromH264ParameterSets(kCFAllocatorDefault, 2, parameterSetPointers, parameterSetSizes, 4, &_formatDescription)
    if status != noErr {
        throw H264Error.CMVideoFormatDescriptionCreateFromH264ParameterSets(status)
    }
    formatDescription = _formatDescription!

    if videoSession != nil {
        VTDecompressionSessionInvalidate(videoSession)
        videoSession = nil
    }
    var videoSessionM: VTDecompressionSession?

    let decoderParameters = NSMutableDictionary()
    let destinationPixelBufferAttributes = NSMutableDictionary()
    destinationPixelBufferAttributes.setValue(NSNumber(unsignedInt: kCVPixelFormatType_32BGRA), forKey: kCVPixelBufferPixelFormatTypeKey as String)

    var outputCallback = VTDecompressionOutputCallbackRecord()
    outputCallback.decompressionOutputCallback = callback
    outputCallback.decompressionOutputRefCon = UnsafeMutablePointer<Void>(unsafeAddressOf(self))

    status = VTDecompressionSessionCreate(nil, formatDescription, decoderParameters, destinationPixelBufferAttributes, &outputCallback, &videoSessionM)
    if status != noErr {
        throw H264Error.VTDecompressionSessionCreate(status)
    }
    self.videoSession = videoSessionM
}
Here pps and sps are buffers containing the PPS and SPS NAL units.
As mentioned above, the strange thing is that it works completely fine on the simulator, but not on an actual device. Both are on iOS 9.3, and I'm simulating the same hardware as the device.
What could cause this error?
And, more generally, where can I go for API reference and error docs for VideoToolbox? Genuinely can't find anything of relevance on Apple's site.
The answer turned out to be that the stream resolution was greater than 1920x1080, which is the maximum that the iPad supports. This is a clear difference from the simulator, which supports resolutions beyond that (perhaps it just uses the Mac VideoToolbox libraries rather than simulating the iOS ones).
Reducing the stream to 1080p or below solved the problem.
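A hedged sketch of guarding against this up front, placed after the format description is created in the code above (the 1920x1080 cap is the limit observed here, not a documented constant):

// Hedged sketch: check the stream size before creating the session; anything
// above 1920x1080 hit kVTVideoDecoderUnsupportedDataFormatErr (-12910) on this device.
let dims = CMVideoFormatDescriptionGetDimensions(_formatDescription!)
if dims.width > 1920 || dims.height > 1080 {
    // Request (or downscale to) a 1080p-or-lower stream before calling
    // VTDecompressionSessionCreate, or expect -12910 on device.
}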
This is the response from a member of Apple staff which pointed me in the right direction: https://forums.developer.apple.com/thread/11637
As for proper VideoToolbox reference - still nothing of value exists, which is a massive disadvantage. One wonders how the tutorial writers first got their information.
Edit: iOS 10 now appears to support streams greater than 1080p.

iOS Swift read PCM Buffer

I have a project for Android that reads a short[] array of PCM data from the microphone buffer for live analysis. I need to convert this functionality to iOS Swift. In Android it is very simple and looks like this:
import android.media.AudioFormat;
import android.media.AudioRecord;
...
AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.DEFAULT, someSampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, AudioRecord.getMinBufferSize(...));
recorder.startRecording();
Later I read the buffer with:
recorder.read(data, offset, length); //data is short[]
(That's what I'm looking for.)
Documentation: https://developer.android.com/reference/android/media/AudioRecord.html
I'm very new to Swift and iOS. I've read a lot of documentation about AudioToolkit, ...Core and whatever. All I found were C++/Obj-C and bridging Swift header solutions. That's much too advanced and outdated for me.
For now I can read PCM data into a CAF file with AVFoundation:
settings = [
    AVLinearPCMBitDepthKey: 16 as NSNumber,
    AVFormatIDKey: Int(kAudioFormatLinearPCM),
    AVLinearPCMIsBigEndianKey: 0 as NSNumber,
    AVLinearPCMIsFloatKey: 0 as NSNumber,
    AVSampleRateKey: 12000.0,
    AVNumberOfChannelsKey: 1 as NSNumber,
]
...
recorder = try AVAudioRecorder(URL: someURL, settings: settings)
recorder.delegate = self
recorder.record()
But that's not what I'm looking for (or is it?). Is there an elegant way to achieve the Android read functionality described above? I need to get a sample array from the microphone buffer. Or do I need to do the reading on the recorded CAF file?
Thanks a lot! Please help me with easy explanations or code examples. iOS terminology is not mine yet ;-)
If you don't mind floating point samples and 48kHz, you can quickly get audio data from the microphone like so:
let engine = AVAudioEngine() // instance variable

func setup() {
    let input = engine.inputNode!
    let bus = 0
    input.installTapOnBus(bus, bufferSize: 512, format: input.inputFormatForBus(bus)) { (buffer, time) -> Void in
        let samples = buffer.floatChannelData[0]
        // audio callback, samples in samples[0]...samples[buffer.frameLength-1]
    }
    try! engine.start()
}
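If, like the Android code, you need 16-bit integer samples rather than floats, a minimal conversion inside that tap block could look like this (written in the same Swift 2-era syntax as the snippet above; mono input assumed):

// Inside the tap block above: convert the Float32 samples to Int16 PCM,
// the closest equivalent of Android's short[] read.
let frameLength = Int(buffer.frameLength)
let floats = buffer.floatChannelData[0]
var shorts = [Int16](count: frameLength, repeatedValue: 0)
for i in 0..<frameLength {
    let clamped = max(-1.0, min(1.0, floats[i]))    // keep the sample within [-1, 1]
    shorts[i] = Int16(clamped * Float(Int16.max))   // scale to the 16-bit range
}
// shorts now holds one callback's worth of 16-bit PCM samples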

Audio Output Routes for AirPlay

I have looked but can't find a way to access the audio output routes so I can detect whether the audio is coming out via AirPlay.
This is what I found in the documentation for iOS 5.0:
kAudioSessionOutputRoute_AirPlay
Discussion
These strings are used as values for the kAudioSession_AudioRouteKey_Type key for the dictionary associated with the kAudioSession_AudioRouteKey_Outputs array.
I can't find a way to get access to the kAudioSession_AudioRouteKey_Outputs array.
Thanks
Even if Bassem seems to have found a solution, for completion's sake, here's how to detect whether the current output route is AirPlay or not:
- (BOOL)isAirPlayActive {
    CFDictionaryRef currentRouteDescriptionDictionary = nil;
    UInt32 dataSize = sizeof(currentRouteDescriptionDictionary);
    AudioSessionGetProperty(kAudioSessionProperty_AudioRouteDescription, &dataSize, &currentRouteDescriptionDictionary);
    if (currentRouteDescriptionDictionary) {
        CFArrayRef outputs = CFDictionaryGetValue(currentRouteDescriptionDictionary, kAudioSession_AudioRouteKey_Outputs);
        if (outputs) {
            if (CFArrayGetCount(outputs) > 0) {
                CFDictionaryRef currentOutput = CFArrayGetValueAtIndex(outputs, 0);
                CFStringRef outputType = CFDictionaryGetValue(currentOutput, kAudioSession_AudioRouteKey_Type);
                return (CFStringCompare(outputType, kAudioSessionOutputRoute_AirPlay, 0) == kCFCompareEqualTo);
            }
        }
    }
    return NO;
}
Keep in mind that you have to #import <AudioToolbox/AudioToolbox.h> and link against the AudioToolbox framework.
Since iOS 6, the recommended approach for this would be using AVAudioSession (the C-based AudioSession API is deprecated as of iOS 7).
let currentRoute = AVAudioSession.sharedInstance().currentRoute
currentRoute returns an AVAudioSessionRouteDescription, a very simple class with two properties: inputs and outputs. Each of these is an array of AVAudioSessionPortDescription objects, which provide the information we need about the current route:
if let outputs = currentRoute?.outputs as? [AVAudioSessionPortDescription] {
    // Usually, there will be just one output port (or none), but let's play it safe...
    let airplayOutputs = outputs.filter { $0.portType == AVAudioSessionPortAirPlay }
    if !airplayOutputs.isEmpty {
        // Connected to AirPlay output...
    } else {
        // Not connected to AirPlay output...
    }
}
The portType is the useful info here... see the AVAudioSessionPortDescription docs for the AVAudioSessionPort... constants that describe each input/output port type, such as line in/out, built-in speaker, Bluetooth LE, headset mic, etc.
Also, don't forget to respond appropriately to route changes by subscribing to the AVAudioSessionRouteChangeNotification.
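For example, a minimal sketch of observing that notification (Swift 2-era API names, matching the snippet above; re-run whichever detection approach you use inside the block):

// Hedged sketch: re-check the output route whenever it changes (e.g. AirPlay toggled).
let observer = NSNotificationCenter.defaultCenter().addObserverForName(
    AVAudioSessionRouteChangeNotification,
    object: AVAudioSession.sharedInstance(),
    queue: NSOperationQueue.mainQueue()) { _ in
        // Re-run the AirPlay detection above and update your UI as needed.
}
// Keep a reference to `observer` and remove it when you no longer need updates.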
CFArrayRef destinations = NULL;
CFNumberRef currentDest = NULL;
UInt32 size = sizeof(destinations);
// Get the output destination list
AudioSessionGetProperty(kAudioSessionProperty_OutputDestinations, &size, &destinations);
// Get the index of the current destination (in the list above)
size = sizeof(currentDest);
AudioSessionGetProperty(kAudioSessionProperty_OutputDestination, &size, &currentDest);
I'm not too sure of the exact syntax, so you'll have to mess around with it a bit, but you should get the general idea.
