Reading QR codes on iOS with AVCaptureSession -- alignment issues?

We have implemented a QRCode reading function in iOS using the AVCaptureSession class, as described nicely here:
https://github.com/HEmobile/ScanBarCode/tree/master/ScanBarCodes
https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVCaptureSession_Class/
But one thing we notice: the QR code has to be aligned exactly vertically or horizontally. Oblique angles such as 45 degrees do not trigger a scan. Searching for this issue turns up very little, which is surprising.
Our experiments with other QR code reading apps indicate that they do not have this limitation. Presumably (since the built-in support is new) those apps don't use AVCaptureSession.
Our question: is this a sign that Apple's implementation is not yet mature, or is there some option to enable or improve this capability?
Thanks for any thoughts.

It sounds like the limitation is somewhere in your code. Check out my GitHub repo: https://github.com/alexekoren/qr-3d
It was built specifically for reading QR codes at angles in a pretty way. I'm testing it now and it picks them up at 30-45 degrees easily.
Here's the direct link for everything you'll need to make the scanner object that can present on a UIView: https://github.com/AlexEKoren/QR-3D/blob/master/Code%20Scanner/Scanner/CSScanner.m
It should work out of the box!
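
For comparison, here is a minimal Swift sketch of the plain AVCaptureMetadataOutput setup (the linked projects are Objective-C, so the class name QRScannerViewController and the overall structure here are illustrative, not their code). With nothing restricted, the metadata output reports QR codes it finds anywhere in the frame; if scans only succeed at certain orientations, that is consistent with the suggestion above that something in the surrounding code (for example a narrowed rectOfInterest) is getting in the way.

```swift
import AVFoundation
import UIKit

// Minimal sketch of a QR scanner built on AVCaptureMetadataOutput.
// Class and identifier names are illustrative.
final class QRScannerViewController: UIViewController, AVCaptureMetadataOutputObjectsDelegate {
    private let session = AVCaptureSession()

    override func viewDidLoad() {
        super.viewDidLoad()

        guard let device = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input) else { return }
        session.addInput(input)

        let output = AVCaptureMetadataOutput()
        guard session.canAddOutput(output) else { return }
        session.addOutput(output)                       // add before setting the types
        output.setMetadataObjectsDelegate(self, queue: .main)
        output.metadataObjectTypes = [.qr]
        // rectOfInterest is left at its default (the whole frame);
        // shrinking it can make detections much less forgiving.

        let preview = AVCaptureVideoPreviewLayer(session: session)
        preview.frame = view.bounds
        preview.videoGravity = .resizeAspectFill
        view.layer.addSublayer(preview)

        session.startRunning()
    }

    func metadataOutput(_ output: AVCaptureMetadataOutput,
                        didOutput metadataObjects: [AVMetadataObject],
                        from connection: AVCaptureConnection) {
        for case let code as AVMetadataMachineReadableCodeObject in metadataObjects {
            print("Scanned:", code.stringValue ?? "")
        }
    }
}
```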

Related

Does the Camera Module for Kivy Allow Custom Framerates?

I'm an artist looking to make an app that utilizes the camera on an Android or iPhone device to display a stereoscopic video feed at 1 to 5 frames per second. Python/Kivy is what I (sort of) know, so I'm starting there. Does the Camera module in Kivy support inputting a custom framerate? The existing documentation doesn't seem to say.
(Also very open to simpler ways to accomplish this/existing applications).
It doesn't directly have a property to set for that, but it should be very easy to achieve. Off the top of my head, you could render the widget in an Fbo and only redraw the Fbo at the rate you require, but there's probably a neater solution.
Probably a bigger problem will be that Kivy's camera support is not that robust; make sure you prototype first to understand what works and what doesn't, or at least what needs more custom work to do what you want.

Apple's AVCamera Night Mode

I've been building a camera app using AVFoundation and wanted to add Night Mode support to it.
Apple features a stunning implementation for this, more about it can be read here: https://www.macrumors.com/guide/night-mode/
Now, the only property I can find related to "night mode" is low-light boost, which only seems to be an iPhone 5 feature:
https://developer.apple.com/documentation/avfoundation/avcapturedevice/1624602-islowlightboostenabled
https://forums.developer.apple.com/thread/52574
I'd like to take advantage of Apple's native Night Mode and implement it in my camera app. Is there any way to do so? Is this a feature that might be added to the SDK in an upcoming release? Did I miss something in the SDK?
Well, some time ago I tried to replicate what Apple calls Night Mode.
Of course, there is no reference to it inside AVFoundation, just as there isn't for HDR or Smart HDR.
The point is: what does Night Mode do? How does Apple achieve this "effect"?
Answering these questions explains why there is nothing for it inside AVFoundation.
They basically blend multiple exposures, replicating what photographers call a long exposure. The only way to do something similar yourself would be to extract frames one by one from didOutputSampleBuffer and then find a (good) way to blend them.
Of course there is a lot more involved, since each frame in the buffer has already been pre-processed by the ISP.
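
To make the frame-blending idea concrete, here is a rough Swift sketch (not Apple's implementation) that averages a fixed number of frames from an AVCaptureVideoDataOutput, assuming an instance of this class has been set as that output's sample buffer delegate on a running session. The class name ExposureStacker and the stack size of 8 are made up for illustration; a real night mode also aligns frames and compensates for the ISP processing mentioned above, which this skips entirely.

```swift
import AVFoundation
import CoreImage

// Hypothetical frame accumulator: averages N frames from a video data
// output to approximate a long exposure (one ingredient of a night mode).
final class ExposureStacker: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let context = CIContext()
    private var accumulated: CIImage?
    private var frameCount = 0
    private let framesToStack = 8   // assumption: fixed stack size

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard frameCount < framesToStack,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // Weight the new frame at 1/N and add it to the running sum.
        let weight = CGFloat(1.0 / Double(framesToStack))
        let frame = CIImage(cvPixelBuffer: pixelBuffer)
            .applyingFilter("CIColorMatrix", parameters: [
                "inputRVector": CIVector(x: weight, y: 0, z: 0, w: 0),
                "inputGVector": CIVector(x: 0, y: weight, z: 0, w: 0),
                "inputBVector": CIVector(x: 0, y: 0, z: weight, w: 0)
            ])

        if let sum = accumulated {
            accumulated = frame.applyingFilter("CIAdditionCompositing",
                                               parameters: [kCIInputBackgroundImageKey: sum])
        } else {
            accumulated = frame
        }
        frameCount += 1

        if frameCount == framesToStack, let result = accumulated,
           let stacked = context.createCGImage(result, from: result.extent) {
            // Hand the stacked CGImage off to the UI or photo library here.
            _ = stacked
        }
    }
}
```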

Creating a 360 photo experience on iOS mobile device

I am interested in VR and trying to get a bit more information. I want to create an experience on iOS where I can take a 360 image and view it on an iOS device by tilting the phone around, using the device's gyroscope: as I tilt the phone, it pans around the 360 image (like the tilt gesture in Google Street View).
And something similar to this app: http://bubb.li/
Can anybody give a brief overview of how this would be doable, any sources that could help me achieve this, APIs, etc.?
Much appreciated.
Two options here: You can use a dedicated device to capture the image for you, or you can write some code to stitch together multiple images taken from the iOS device as you move it around a standing point.
I've used the Ricoh Theta for this (no affiliation). They have a 360 viewer in the SDK for mapping 360 images to a sphere that works exactly as you've asked.
Assuming you've figured out how to create 360 photo spheres, you can use Unity or Unreal (and probably other development platforms) to create navigation between the locations you captured.
Here is a tutorial that looks pretty detailed for doing this in Unity:
https://tutorialsforvr.com/creating-virtual-tour-app-in-vr-using-unity/
One advantage of doing this in something like Unity or Unreal is that once you have navigation between multiple photo spheres working, it's fairly easy to add animation or other interactive elements. I've seen interactive stories done with 360 video using this method.
(I see that the question is from a fairly long time ago, but it was one of the top results when I looked for this same topic)
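
If you prefer to stay native instead of going through Unity or Unreal, the same idea (image on the inside of a sphere, camera driven by the gyroscope) can be sketched with SceneKit and Core Motion. This is only an alternative sketch, not what the answer above describes: the asset name "pano.jpg" is a placeholder, and the direct quaternion assignment may need an axis remap depending on device orientation.

```swift
import SceneKit
import CoreMotion
import UIKit

// Sketch: equirectangular 360 photo mapped onto a sphere, panned with device motion.
final class PanoViewController: UIViewController {
    private let motion = CMMotionManager()

    override func viewDidLoad() {
        super.viewDidLoad()

        let sceneView = SCNView(frame: view.bounds)
        sceneView.scene = SCNScene()
        view.addSubview(sceneView)

        // Sphere textured on the inside with the 360 image ("pano.jpg" is a placeholder).
        let sphere = SCNSphere(radius: 10)
        sphere.firstMaterial?.diffuse.contents = UIImage(named: "pano.jpg")
        sphere.firstMaterial?.isDoubleSided = true
        // Flip horizontally so the image isn't mirrored when viewed from inside.
        sphere.firstMaterial?.diffuse.contentsTransform = SCNMatrix4MakeScale(-1, 1, 1)
        sphere.firstMaterial?.diffuse.wrapS = .repeat
        sceneView.scene?.rootNode.addChildNode(SCNNode(geometry: sphere))

        // Camera sits at the centre of the sphere.
        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        sceneView.scene?.rootNode.addChildNode(cameraNode)

        // Drive the camera orientation from device motion (gyroscope + accelerometer).
        guard motion.isDeviceMotionAvailable else { return }
        motion.deviceMotionUpdateInterval = 1.0 / 60.0
        motion.startDeviceMotionUpdates(to: .main) { data, _ in
            guard let q = data?.attitude.quaternion else { return }
            // Note: you may need to remap axes here depending on device orientation.
            cameraNode.orientation = SCNQuaternion(Float(q.x), Float(q.y), Float(q.z), Float(q.w))
        }
    }
}
```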

Is it possible to use Vuforia without a camera?

Is it possible to use Vuforia without a camera for image tracking?
Basically, I would like a function I could call with an image as an input parameter that returns the coordinates of an image target as a result. Does that exist?
It is unfortunately not possible. I've looked for such an option myself several times while working on a Moodstocks (image recognition SDK) / Vuforia mashup (see these 2 blog posts if you are interested), but the Vuforia SDK prevents the use of any source other than the camera.
I guess the main reason for this is that camera management is handled entirely inside the Vuforia SDK, probably to make it easier to use, since managing the camera yourself is at best a boring task (lines and lines of code to repeat in each project...) and at worst a huge pain in the ass (especially on Android, where some devices don't behave as expected).
By the way, the Vuforia SDK doesn't look like the best solution for your use case: it is mainly an augmented-reality SDK, focused on real-time tracking, which implies working with a camera stream, so using it for "simple" image recognition is really overkill!

How to take a screenshot of OpenGL ES content on top of the live camera preview in iOS (augmented reality app)?

I am a complete beginner in Objective-C and iOS programming. I spent a month finding out how to show a 3D model using OpenGL ES (version 1.1) on top of the live camera preview using AVFoundation. I am building a kind of augmented reality application on iPad: I process the input frames and show a 3D object overlaid on the camera preview in real time. That part went fine, because there are so many sites and tutorials about these things (thanks to this website as well).
Now I want to capture the whole screen (the model with the camera preview as the background) as an image and show it on the next screen. I found a really good demonstration here: http://cocoacoderblog.com/2011/03/30/screenshots-a-legal-way-to-get-screenshots/. It does everything I want to do, but, as I said, I am a beginner and don't understand the whole project without a detailed explanation, so I'm stuck because I don't know how to implement this.
Does anybody know a good tutorial or other source on this topic, or have any suggestions about what I should learn in order to do this screen capture? It would help me a lot in moving on.
Thank you in advance.
I'm currently attempting to solve this same problem to allow a user to take a screenshot of an Augmented Reality app. (We use Qualcomm's AR SDK plugged into Unity 3D to make our AR apps, which saved me from ever having to learn how to programmatically render OpenGL models)
For my solution I am first looking at implementing the second answer found here: How to take a screenshot programmatically
Barring that I will have to re-engineer the "Combined Screenshots" method found in CocoaCoder's Screenshots app.
I'll check back in when I figure out which one works better.
Here are 3 very helpful links for capturing a screenshot:
OpenGL ES View Snapshot
How to capture video frames from the camera as images using AV Foundation
How do I take a screenshot of my app that contains both UIKit and Camera elements
Enjoy
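
As a small illustration of the "combined screenshots" idea from the links above: once you have the camera frame as a UIImage (for example from a captured sample buffer) and the OpenGL content as a UIImage (for example via glReadPixels or a view snapshot), compositing them is just drawing one on top of the other. The function below is a hypothetical helper in Swift, not code from the linked projects.

```swift
import UIKit

// Sketch of the "combined screenshot" step: draw the camera frame and the
// GL snapshot into a single bitmap. Both input images are assumed to
// already exist; producing them is covered by the links above.
func combineScreenshot(cameraImage: UIImage, glImage: UIImage) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: cameraImage.size)
    return renderer.image { _ in
        // Camera preview as the background layer.
        cameraImage.draw(in: CGRect(origin: .zero, size: cameraImage.size))
        // OpenGL content composited on top, with its alpha preserved.
        glImage.draw(in: CGRect(origin: .zero, size: cameraImage.size))
    }
}
```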
