Swift: No matter what I do, CIDetector is always nil - iOS

I don't understand why this code doesn't work: the detector is always nil with the CIDetectorTypeQRCode constant, but everything works with CIDetectorTypeFace.
I suspect a bug in Apple's API. This is the official doc: Apple documentation
@IBAction func analyseTag(sender: AnyObject) {
    var detector: CIDetector = CIDetector(ofType: CIDetectorTypeQRCode, context: nil, options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    var decode = ""
    var ciImage: CIImage = CIImage(image: ImgChoosed.image)
    var message: String = ""
    let features = detector.featuresInImage(ciImage)
    for feature in features as [CIQRCodeFeature] {
        message += feature.messageString
    }
    if (message == "") {
        println("nothing")
    } else {
        println("\(message)")
    }
}
Do you have a solution?
Thanks in advance, guys.

The code you provided can't have a nil detector, because it's not an optional and the compiler would complain in several places in your code if it were.
If features is empty then you know it didn't find a QR code in your image. Try providing a better image or turning down the CIDetectorAccuracy.
If features isn't empty then your cast is failing.
Edit:
You can't pass a nil context in the constructor.
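For reference, here is a minimal sketch in current Swift syntax of how the same detection can be written without force-unwrapping anything. The helper name and its UIImage parameter are illustrative, not from the question, and the non-nil CIContext follows the advice above:

import CoreImage
import UIKit

func decodeQRCode(in image: UIImage?) -> String? {
    // CIImage(image:) can fail (e.g. for a nil or CGImage-less UIImage), so unwrap safely.
    guard let uiImage = image, let ciImage = CIImage(image: uiImage) else { return nil }

    // The initializer is failable in current Swift, so treat the detector as optional.
    guard let detector = CIDetector(ofType: CIDetectorTypeQRCode,
                                    context: CIContext(),
                                    options: [CIDetectorAccuracy: CIDetectorAccuracyHigh]) else {
        return nil
    }

    // featuresInImage(_:) became features(in:); use a conditional cast instead of a forced one.
    let features = detector.features(in: ciImage) as? [CIQRCodeFeature] ?? []
    let message = features.compactMap { $0.messageString }.joined()
    return message.isEmpty ? nil : message
}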

This happened to us as well. The iPhone 4s doesn't return a CIDetector of type QRCode; the other types (rectangle, face) work, though.
The same code works as expected on the iPhone 6. I haven't tested on a 5 or 5s yet.
But two weeks ago it was still working on the 4s, I believe. It was still on iOS 8 back then, I guess.

Make sure your ImgChoosed.image is not nil.
Try a different input image for testing.
Try for feature in features as! [CIQRCodeFeature].

I found that running on a device resolved this issue. The simulator always seemed to return nil for me.

Related

Why is the Vision framework unable to align two images?

I'm trying to take two images using the camera, and align them using the iOS Vision framework:
func align(firstImage: CIImage, secondImage: CIImage) {
    let request = VNTranslationalImageRegistrationRequest(
        targetedCIImage: firstImage) { request, error in
        if error != nil {
            fatalError()
        }
        let observation = request.results!.first
            as! VNImageTranslationAlignmentObservation
        let alignedSecondImage = secondImage.transformed(
            by: observation.alignmentTransform)
        let compositedImage = firstImage.applyingFilter(
            "CIAdditionCompositing",
            parameters: ["inputBackgroundImage": alignedSecondImage])
        // Save the compositedImage to the photo library.
    }
    try! visionHandler.perform([request], on: secondImage)
}
let visionHandler = VNSequenceRequestHandler()
But this produces grossly misaligned images.
You can see that I've tried three different types of scenes: a close-up subject, an indoor scene, and an outdoor scene. I tried more outdoor scenes, and the result is the same in almost every one of them.
I was expecting a slight misalignment at worst, but not such a complete misalignment. What is going wrong?
I'm not passing the orientation of the images into the Vision framework, but that shouldn't be a problem for aligning images. It's a problem only for things like face detection, where a rotated face isn't detected as a face. In any case, the output images have the correct orientation, so orientation is not the problem.
My compositing code is working correctly; it's only the Vision framework that's a problem. If I remove the calls to the Vision framework and put the phone on a tripod, the composition works perfectly and there's no misalignment. So the problem is the Vision framework.
This is on iPhone X.
How do I get Vision framework to work correctly? Can I tell it to use gyroscope, accelerometer and compass data to improve the alignment?
You should set secondImage as the targeted image, and perform the handler with firstImage.
I used your compositing approach.
Check out this example from MLBoy:
let request = VNTranslationalImageRegistrationRequest(targetedCIImage: image2, options: [:])
let handler = VNImageRequestHandler(ciImage: image1, options: [:])
do {
    try handler.perform([request])
} catch let error {
    print(error)
}
guard let observation = request.results?.first as? VNImageTranslationAlignmentObservation else { return }
let alignmentTransform = observation.alignmentTransform
image2 = image2.transformed(by: alignmentTransform)
let compositedImage = image1.applyingFilter("CIAdditionCompositing", parameters: ["inputBackgroundImage": image2])
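If you also need the saving step that the question's comment refers to, one way (a sketch, not part of the answer above) is to render the composited CIImage through a CIContext and hand the result to UIImageWriteToSavedPhotosAlbum:

import CoreImage
import UIKit

// Render the composited CIImage into a CGImage, then save it.
// Assumes `compositedImage` is the CIImage produced above.
let context = CIContext()
if let cgImage = context.createCGImage(compositedImage, from: compositedImage.extent) {
    let uiImage = UIImage(cgImage: cgImage)
    // Requires the NSPhotoLibraryAddUsageDescription key in Info.plist.
    UIImageWriteToSavedPhotosAlbum(uiImage, nil, nil, nil)
}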

iOS - PhotosKit - Trouble identifying modified assets

I'm working with the Photos framework. Specifically, I'd like to keep track of the current camera roll status, updating it every time assets are added, deleted, or modified (mainly when a picture is edited by the user, e.g. a filter is added or the image is cropped).
My first implementation looks something like the following:
private var lastAssetFetchResult: PHFetchResult<PHAsset>?

func photoLibraryDidChange(_ changeInstance: PHChange) {
    guard let fetchResult = lastAssetFetchResult,
          let details = changeInstance.changeDetails(for: fetchResult) else { return }
    let modified = details.changedObjects
    let removed = details.removedObjects
    let added = details.insertedObjects
    // update fetch result
    lastAssetFetchResult = details.fetchResultAfterChanges
    // do stuff with modified, removed, added
}
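(For context, here is a minimal sketch of how the initial fetch result and the change observer might be set up around this method; the class name and fetch options are assumptions, not shown in the question:)

import Photos

final class CameraRollTracker: NSObject, PHPhotoLibraryChangeObserver {
    private var lastAssetFetchResult: PHFetchResult<PHAsset>?

    override init() {
        super.init()
        // Fetch the current camera roll once, then listen for changes.
        let options = PHFetchOptions()
        options.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: true)]
        lastAssetFetchResult = PHAsset.fetchAssets(with: .image, options: options)
        PHPhotoLibrary.shared().register(self)
    }

    deinit {
        PHPhotoLibrary.shared().unregisterChangeObserver(self)
    }

    // Called on a background queue whenever the photo library changes.
    func photoLibraryDidChange(_ changeInstance: PHChange) {
        // ... as in the question above.
    }
}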
However, I soon found out that details.changedObjects would not contain only the assets that have been modified by the user, so I moved to the following implementation:
let modified = modifiedAssets(changeInstance: changeInstance)
with:
func modifiedAssets(changeInstance: PHChange) -> [PHAsset] {
    var modified: [PHAsset] = []
    lastAssetFetchResult?.enumerateObjects({ (obj, _, _) in
        if let detail = changeInstance.changeDetails(for: obj) {
            if detail.assetContentChanged {
                if let updatedObj = detail.objectAfterChanges {
                    modified.append(updatedObj)
                }
            }
        }
    })
    return modified
}
So I'm relying on the PHObjectChangeDetails.assetContentChanged property, which, as the documentation states, indicates whether the asset's photo or video content has changed.
This brought the results closer to what I was expecting, but I still don't entirely understand its behavior.
On some devices (e.g. iPad Mini 3) I get the expected result (assetContentChanged = true) in all the cases that I tested, whereas on others (e.g. iPhone 6s Plus, iPhone 7) it's hardly ever matching my expectation (assetContentChanged is false even for assets that I cropped or added filters to).
All the devices share the latest iOS 11.2 version.
Am I getting anything wrong?
Do you think I could achieve my goal some other way?
Thank you in advance.

nil value returned by NSClassFromString in Swift 2.0

I'm using the following code in my project to draw a fading blur effect on a view:
let customBlurClass: AnyObject.Type = NSClassFromString("_UICustomBlurEffect")!
let customBlurObject: NSObject.Type = customBlurClass as! NSObject.Type
self.blurEffect = customBlurObject.init() as! UIBlurEffect
self.blurEffect.setValue(1.0, forKeyPath: "scale")
self.blurEffect.setValue(radius, forKeyPath: "blurRadius")
super.init(effect: radius == 0 ? nil : self.blurEffect)
Sometimes on Fabric I get a crash report from the app on this line:
let customBlurClass: AnyObject.Type = NSClassFromString("_UICustomBlurEffect")!
which means that NSClassFromString returned a nil value.
I searched a lot about this problem but found no useful answers.
Please help.
Thanks.
The most likely explanation is that those crashes occur on devices running iOS 8 or earlier. _UICustomBlurEffect was introduced in iOS 9.
You should do:
if let blurClass = NSClassFromString("_UICustomBlurEffect") {
    // set up blur view
}
to avoid crashes on devices where it's not supported.
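A fuller version of that guard might look like the following sketch (current Swift syntax; the .light fallback style is an assumption, and radius is the same value used in the question):

// Use the private effect when it can be loaded; otherwise fall back to a public blur.
if let customBlurClass = NSClassFromString("_UICustomBlurEffect") as? NSObject.Type,
   let customBlur = customBlurClass.init() as? UIBlurEffect {
    customBlur.setValue(1.0, forKeyPath: "scale")
    customBlur.setValue(radius, forKeyPath: "blurRadius")
    self.blurEffect = customBlur
} else {
    self.blurEffect = UIBlurEffect(style: .light)   // assumed fallback
}
super.init(effect: radius == 0 ? nil : self.blurEffect)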

How to detect the first launch of an app after every new version upgrade?

I have a requirement to detect the first launch of the app after the user upgrades it to a newer version, so I can perform a certain task only on that first launch. Many links are available online, but none answer my query clearly. How can I achieve this in Swift 2 on iOS 9?
Most of the available answers say to maintain a key in NSUserDefaults, set its value to false, and make it true after the first launch. But the problem is that after I upgrade my app the value will still be true, so my scenario fails on an app upgrade. Any help would be much appreciated. Thanks!
Try this:
let existingVersion = NSUserDefaults.standardUserDefaults().objectForKey("CurrentVersionNumber") as? String
let appVersionNumber = NSBundle.mainBundle().objectForInfoDictionaryKey("CFBundleShortVersionString") as! String
if existingVersion != appVersionNumber {
    NSUserDefaults.standardUserDefaults().setObject(appVersionNumber, forKey: "CurrentVersionNumber")
    NSUserDefaults.standardUserDefaults().synchronize()
    // You can handle your code here
}
Updating Yogesh's perfect yet simple solution to Swift 4:
let existingVersion = UserDefaults.standard.object(forKey: "CurrentVersionNumber") as? String
let appVersionNumber = Bundle.main.object(forInfoDictionaryKey: "CFBundleShortVersionString") as! String
if existingVersion != appVersionNumber {
    print("existingVersion = \(String(describing: existingVersion))")
    UserDefaults.standard.set(appVersionNumber, forKey: "CurrentVersionNumber")
    // run code here
}
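If the check is needed in more than one place, it can be wrapped in a small helper called early in application(_:didFinishLaunchingWithOptions:). A sketch; the function name is made up:

import Foundation

// Returns true exactly once per installed version (including the very first install).
func isFirstLaunchForCurrentVersion() -> Bool {
    let key = "CurrentVersionNumber"
    let currentVersion = Bundle.main.object(forInfoDictionaryKey: "CFBundleShortVersionString") as? String
    let storedVersion = UserDefaults.standard.string(forKey: key)
    guard storedVersion != currentVersion else { return false }
    UserDefaults.standard.set(currentVersion, forKey: key)
    return true
}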

Decoding H264: VTDecompressionSessionCreate fails with error code -12910 (kVTVideoDecoderUnsupportedDataFormatErr)

I'm getting error -12910 (kVTVideoDecoderUnsupportedDataFormatErr) using VTDecompressionSessionCreate when running code on my iPad, but not on the sim. I'm using Avios (https://github.com/tidwall/Avios) and this is the relevant section:
private func initVideoSession() throws {
    formatDescription = nil
    var _formatDescription: CMFormatDescription?
    let parameterSetPointers: [UnsafePointer<UInt8>] = [pps!.buffer.baseAddress, sps!.buffer.baseAddress]
    let parameterSetSizes: [Int] = [pps!.buffer.count, sps!.buffer.count]
    var status = CMVideoFormatDescriptionCreateFromH264ParameterSets(kCFAllocatorDefault, 2, parameterSetPointers, parameterSetSizes, 4, &_formatDescription)
    if status != noErr {
        throw H264Error.CMVideoFormatDescriptionCreateFromH264ParameterSets(status)
    }
    formatDescription = _formatDescription!

    if videoSession != nil {
        VTDecompressionSessionInvalidate(videoSession)
        videoSession = nil
    }
    var videoSessionM: VTDecompressionSession?

    let decoderParameters = NSMutableDictionary()
    let destinationPixelBufferAttributes = NSMutableDictionary()
    destinationPixelBufferAttributes.setValue(NSNumber(unsignedInt: kCVPixelFormatType_32BGRA), forKey: kCVPixelBufferPixelFormatTypeKey as String)

    var outputCallback = VTDecompressionOutputCallbackRecord()
    outputCallback.decompressionOutputCallback = callback
    outputCallback.decompressionOutputRefCon = UnsafeMutablePointer<Void>(unsafeAddressOf(self))

    status = VTDecompressionSessionCreate(nil, formatDescription, decoderParameters, destinationPixelBufferAttributes, &outputCallback, &videoSessionM)
    if status != noErr {
        throw H264Error.VTDecompressionSessionCreate(status)
    }
    self.videoSession = videoSessionM
}
Here pps and sps are buffers containing the PPS and SPS NAL units.
As mentioned above, the strange thing is that it works completely fine on the simulator, but not on an actual device. Both are on iOS 9.3, and I'm simulating the same hardware as the device.
What could cause this error?
And, more generally, where can I go for API reference and error docs for VideoToolbox? Genuinely can't find anything of relevance on Apple's site.
The answer turned out to be that the stream resolution was greater than 1920x1080, which is the maximum that the iPad supports. This is a clear difference from the simulator, which supports resolutions beyond that (perhaps it just uses the Mac VideoToolbox libraries rather than simulating the iOS ones).
Reducing the stream to below 1080p solved the problem.
This is the response from a member of Apple staff which pointed me in the right direction: https://forums.developer.apple.com/thread/11637
As for a proper VideoToolbox reference: still nothing of value exists, which is a massive disadvantage. One wonders how the tutorial writers first got their information.
Edit: iOS 10 now appears to support streams greater than 1080p.
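If you want to fail fast with a clearer error instead of waiting for VTDecompressionSessionCreate to return -12910, one option (a sketch in current Swift syntax, reusing the question's H264Error enum; the 1920x1080 limit is the one described above and varies by device) is to check the coded dimensions right after the format description is created:

// Inside initVideoSession(), after formatDescription has been created:
let dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription)
let maxWidth: Int32 = 1920    // assumed hardware decoder limit for this iPad
let maxHeight: Int32 = 1080
if dimensions.width > maxWidth || dimensions.height > maxHeight {
    // Request a smaller stream (or downscale server-side) instead of
    // letting session creation fail with kVTVideoDecoderUnsupportedDataFormatErr.
    throw H264Error.VTDecompressionSessionCreate(kVTVideoDecoderUnsupportedDataFormatErr)
}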
