iPad QR scanner - how to adjust ZXing scanner's focus area - iOS

Currently the rectangular frame takes up almost the whole screen. Is there any way to reduce the focus area?
I've found that an iPhone app with ZXing built in, run on the iPad, scans more efficiently than the iPad app does.
So I'm trying to reduce the focus area, hoping this will yield a better result on the iPad.

Are you talking about an iPad 3? The iPad 2 has a fixed-focus camera. An issue was recently found in the iPad 3 support that led to poor decode rates, particularly for dense codes. It has been partially fixed by adjusting the resolution ZXing asks iOS for, but the fix isn't complete at this point.
Or are you just thinking about cropping down the region ZXing looks at to detect a code? That is unlikely to produce better decode rates. Given the resolution ZXing asks iOS for, it can scan an entire captured image very quickly. In theory, extraneous clutter could confuse it and cropping could reduce that, but I wouldn't spend any effort on cropping until I was sure that confusion was really happening, and I haven't seen any evidence of it.
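That said, if you do want to experiment with a tighter region of interest, the crop itself is simple. A minimal sketch in Swift using plain Core Graphics (independent of ZXing's own API; the centeredCrop name and the 0.6 fraction are just illustrative):

    import CoreGraphics

    // Crop the capture frame to a centered region of interest before
    // handing it to the decoder. The 0.6 fraction is arbitrary.
    func centeredCrop(of frame: CGImage, fraction: CGFloat = 0.6) -> CGImage? {
        let width = CGFloat(frame.width) * fraction
        let height = CGFloat(frame.height) * fraction
        let roi = CGRect(x: (CGFloat(frame.width) - width) / 2,
                         y: (CGFloat(frame.height) - height) / 2,
                         width: width, height: height)
        return frame.cropping(to: roi)
    }

Whether the decoder actually benefits is exactly the open question above, so measure decode rates before and after.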

Related

Unity game lags when tested on iOS devices

I recently created a small puzzle game in Unity, just a simple one, no fancy effects or anything. It runs really smoothly when I test it in Unity.
FPS normally caps at 200, and at the largest resolution it's around 80-120 FPS, super smooth. But after I build an iOS version and test it on a device, it's quite laggy. I tested on an iPhone 6+, an iPhone X, and a 9.5-inch iPad, and the outcome is still the same: it lags a little. Maybe I need to adjust some graphics settings in Unity? I'd appreciate some advice from you guys. Thanks for reading.
You can try a few things.
At the very beginning of your app, set the target frame rate to 30:
Application.targetFrameRate = 30;
Downgrade the quality settings to medium. Within that, also disable or tone down lighting-related settings if yours is a simple 2D game.
Optimize your art. Pack sprites using packing tags, and on iOS keep their compression at PVRTC; only the ones that look really bad after compression should be RGB24 or RGBA32. Disable options like Generate Physics Shape (if you're not using it) and Generate Mip Maps.
Have a look at your UI. Anything in the UI that is not interactive (simple images or texts that aren't buttons, input fields, etc.) should have Raycast Target off. The Rich Text option on texts should also be off if your app doesn't specifically need it.

ARKit: little jumps during tracking

I've been using ARKit and I'm loving it, but I've noticed that tracking can get a little jumpy (objects suddenly shift a little bit off their position, like 1-3 cm). I've been wondering if there's a way to smooth out these jumps so they wouldn't be so distracting. Here is a short video demonstrating it.
https://youtu.be/wmMBjlLyK7w
I have been using ARKit and am also loving it. I have been experiencing these issues as well, and while I have my theories, I am positive it is an issue with the hardware (comment with which device you are using and I might be able to give a better estimate).
I believe it is the cameras on our devices, and if that is the case, I would not worry about it too much, because that would mean it's a behind-the-scenes problem we can't change or alter.
If I'm not mistaken, I remember Apple saying something about this in one of their developer sessions around this month's keynote. As I said before, I wouldn't worry about it; older devices will have a harder time with the tracking because of their poorer cameras.
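If the jumps still bother you, one common mitigation you can apply yourself (my own suggestion; ARKit doesn't expose a smoothing switch that I know of) is to low-pass filter positions instead of snapping objects to each new estimate. A minimal Swift sketch, where the PositionSmoother class and the 0.15 alpha are purely illustrative:

    import simd

    // Blend each new ARKit-estimated position with the previous filtered
    // value, so small tracking corrections are spread over several frames
    // instead of appearing as a sudden 1-3 cm jump.
    final class PositionSmoother {
        private var filtered: simd_float3?
        private let alpha: Float  // 0...1; lower = smoother but laggier

        init(alpha: Float = 0.15) { self.alpha = alpha }

        func smooth(_ raw: simd_float3) -> simd_float3 {
            guard let previous = filtered else {
                filtered = raw
                return raw
            }
            let blended = simd_mix(previous, raw, simd_float3(repeating: alpha))
            filtered = blended
            return blended
        }
    }

    // e.g. in SCNSceneRendererDelegate's renderer(_:updateAtTime:):
    // node.simdPosition = smoother.smooth(targetPosition)

The trade-off is a slight lag behind the true position, so keep alpha high enough that deliberate movement still feels responsive.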

Take photos with "portrait" mode (bokeh) in iOS 11 / iPhone 7/8 Plus in 3rd-party app?

The iPhone 7 plus and 8 plus (and X) have an effect in the native camera app called "Portrait mode", which simulates a bokeh-like effect by using depth data to blur the background.
I want to add the capability to take photos with this effect in my own app.
I can see that in iOS 11, depth data is available. But I have no idea how to use this to achieve the effect.
Am I missing something -- is it possible to turn on this effect somewhere and just get the image with it applied, rather than having to try and make this complicated algorithm myself?
cheers
Unfortunately, Portrait mode and Portrait Lighting aren't open to developers as of iOS 11, so you would have to implement a similar effect on your own. Capturing Depth in iPhone Photography and Image Editing with Depth from this year's WWDC go into detail on how to capture and edit images with depth data.
There are two sample projects on the developer site that show you how to capture and visualize depth data using a Metal shader, and how to detect faces using AVFoundation. You could definitely use these to get started! If you search for AVCam in the Guides and Sample Code section, they should be the first two that come up (I would post the links, but Stack Overflow is only letting me add two).
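For reference, opting in to depth capture looks roughly like this. A minimal Swift sketch (session setup, inputs, and the delegate are omitted, and the output must be attached to a session using the dual camera for the supported check to pass):

    import AVFoundation

    let photoOutput = AVCapturePhotoOutput()

    // Call after adding photoOutput to a configured dual-camera session.
    func configureDepthCapture() {
        guard photoOutput.isDepthDataDeliverySupported else { return }
        photoOutput.isDepthDataDeliveryEnabled = true
    }

    func capturePhotoWithDepth(delegate: AVCapturePhotoCaptureDelegate) {
        let settings = AVCapturePhotoSettings()
        settings.isDepthDataDeliveryEnabled = true
        photoOutput.capturePhoto(with: settings, delegate: delegate)
        // The delivered AVCapturePhoto exposes photo.depthData; the
        // background blur itself is still yours to implement (e.g. a
        // CIFilter masked by the depth map).
    }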

Disable camera shaking in iOS

I am creating a simple camera app and I want to add 'image stabilization' so that when hands are shaking, the camera does not twitch. Is it possible to do this in iOS?
You can do this by getting the raw image from the camera, using only a subset of the raw image frame, and then programmatically picking a new subset of each raw image to use for the next frame. Needless to say, this is a large amount of work and should only be undertaken if you know what you are doing or want to have the most impressive video/picture-taking app. A rough sketch of the idea follows.
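A minimal Swift sketch of that approach, assuming you already estimate a per-frame shake offset somehow (feature tracking, gyroscope data, etc.; that estimation is the hard part and is not shown):

    import CoreGraphics

    // Keep a smaller "window" inside each full frame and move that window
    // to counteract the measured shake, then display only the cropped result.
    func stabilizedCrop(of frame: CGImage, window: CGRect,
                        shakeOffset: CGPoint) -> CGImage? {
        let shifted = window.offsetBy(dx: shakeOffset.x, dy: shakeOffset.y)
        let bounds = CGRect(x: 0, y: 0,
                            width: CGFloat(frame.width),
                            height: CGFloat(frame.height))
        // If the compensated window would leave the frame, fall back to
        // the uncompensated crop rather than returning nil.
        guard bounds.contains(shifted) else { return frame.cropping(to: window) }
        return frame.cropping(to: shifted)
    }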
The iPhone 6+ has this built into the hardware, and that is, I believe, what the previous comment's link to AVFoundation is talking about.
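AVFoundation also exposes a software stabilization switch on capture connections (separate from the 6+'s optical hardware, but the easy route if it covers your needs). A sketch, assuming movieOutput is an AVCaptureMovieFileOutput already attached to a running session:

    import AVFoundation

    func enableStabilization(on movieOutput: AVCaptureMovieFileOutput) {
        guard let connection = movieOutput.connection(with: .video) else { return }
        if connection.isVideoStabilizationSupported {
            // .auto lets the system pick the best mode the current
            // device and format support.
            connection.preferredVideoStabilizationMode = .auto
        }
    }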

Programmatically determine available iPhone camera resolutions

It looks like when I shoot video with UIImagePickerControllerQualityTypeMedium, on an iPod Touch it comes out at 480x360, but on an iPhone 4 it's something higher (I can't say just what, as I don't have one handy at the moment), and on an iPad 2 presumably the same as the 4, if not something different again.
I'd like to shoot the same quality on all devices -- I have to add some frames and titles, and it'll make my life a lot easier if I just have to code that for one resolution. Is there any way to determine what the different UIImagePickerControllerQualityType values correspond to at run time? (Apart from shooting video with each and then examining the result, that is.)
Or is my only choice to use UIImagePickerControllerQualityType640x480?
If you need more customization/power on iOS than you get with the higher-level objects, such as UIImagePickerController, it is recommended to work at the next lower level: the AV Foundation framework. Apple has some excellent documentation on AV Foundation programming that should come in handy for that purpose.
Unfortunately, even there you are limited to capturing at 640x480 if you want a resolution that is standard across all devices. There is, however, a great chart available at the same link (the anchors are broken in the docs, so Ctrl+F for "Capturing Still Images") that lists the resolutions for the various devices under each quality setting.
Your most solid bet, assuming 640x480 is too small, is to work out some sort of scaling algorithm that lets you scale according to each device's overall resolution.
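If you do drop down to AV Foundation, you can also check at run time which capture presets the current device supports and choose accordingly. A minimal Swift sketch (using the modern preset names; the session needs its camera input added before the check is meaningful):

    import AVFoundation

    // Returns the subset of candidate presets this session can actually use,
    // so you can pick the best resolution common to your target devices.
    func supportedPresets(for session: AVCaptureSession) -> [AVCaptureSession.Preset] {
        let candidates: [AVCaptureSession.Preset] = [
            .vga640x480, .iFrame960x540, .hd1280x720, .hd1920x1080
        ]
        return candidates.filter { session.canSetSessionPreset($0) }
    }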
