Can you activate and deactivate image detection in ARKit using Unity UI features? (iOS)

Is it possible to turn image detection on and off inside an application using UI features from Unity? I want image detection to run only after a toggle is selected, while plane detection stays active the whole time. When the toggle is deselected I want to look through the camera with nothing happening for images, but still detect planes on the ground. Is there a good way to do this?

Related

ARCore Augmented Images with 3D object interaction

I want to build a digital catalogue application
where I detect an image in a catalogue and place a 3D object on it.
This can be achieved with ARCore Augmented Images.
What I need is: when I click/touch the 3D object, I need to show some information and videos.
For this particular task I need some SDK options.
Without Vuforia, can this be achieved using ARCore + Unity, Android OpenCV, or anything else?
This requires a lot of work, from creating animations and layers to defining colliders and controlling them with backend code.
First you create the animations and animation controllers, then add colliders to the hot spots where you want to click on the object (e.g. touch the door to open it), then map each collider's click event to fire a specific animation.
It is actually better to follow a tutorial that covers the animation basics first; it will then be easy to combine with the AR project:
https://unity3d.com/learn/tutorials/s/animation

Customised focus on iOS camera

Is it possible to create our own focus for the iOS camera in our application, so that it recognises an object when the camera is over it?
For example, I have a rectangular area on a sheet of paper, and when I point the camera at the paper it should highlight the rectangular region, much like the face identifier does.
So, is this possible? If yes, then how?
This isn't focus, it's shape recognition. IIRC only face recognition is built in, so you need to do this yourself or use a third-party solution. Doing it yourself, you would probably want to use OpenCV; a third-party option would be something like Metaio.
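On newer iOS versions (11 and later) this kind of rectangle detection is also available through Apple's Vision framework, without OpenCV or a third-party SDK. A minimal sketch, assuming you already have a still image to analyse; the confidence threshold and observation limit are illustrative values:

```swift
import UIKit
import Vision

// Minimal sketch: find a rectangular region in a still image with the Vision
// framework (iOS 11+). Offered as an alternative to OpenCV/Metaio; the
// threshold and observation limit below are illustrative assumptions.
func detectRectangle(in image: UIImage, completion: @escaping (VNRectangleObservation?) -> Void) {
    guard let cgImage = image.cgImage else { return completion(nil) }

    let request = VNDetectRectanglesRequest { request, _ in
        // Observations come back in normalized image coordinates (0...1).
        let best = (request.results as? [VNRectangleObservation])?.first
        DispatchQueue.main.async { completion(best) }
    }
    request.minimumConfidence = 0.8   // ignore weak candidates
    request.maximumObservations = 1   // we only expect the one rectangle on the sheet

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```

The returned corner points can then be converted to view coordinates and used to draw the highlight over the region.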

DeepEnd (Cydia tweak) like 3D background for iOS

I am trying to emulate a 3D background in one of the applications we are developing.
Check this video for what I am trying to do: http://www.youtube.com/watch?v=429kM-yXGz8
Here is what I am doing to emulate this 3D illusion in our application for iPad.
I have a RootView with 3 rounded buttons centered on the screen which animate in a circular motion.
At the bottom of the screen I have some banners of size 600x200 which keep rotating with a flip animation.
I also have some graphical text that is part of the background and contains the welcome message.
All elements are individual graphics, so when the user moves the iPad we move only the background, based on the position of the iPad using the x, y, z values from the accelerometer.
The background moves accordingly, but this is not enough for a 3D illusion, so we decided to add shadows to the graphical elements (buttons, banners, text) and move the shadows with the iPad's position as well.
However, the result is not convincing, and the accelerometer does not update its values when the user, standing up and facing the iPad straight on, moves it left and right.
Has anyone achieved something similar with success, or can anyone point me to a resource on how to do this? I am just confused whether using only the accelerometer will work, or whether I should go with the gyroscope.
Using face detection to simulate a 3D effect has already been done (by me). You can download a complete sample from http://evict.nl/code/face-tracking; see the video on that page for a quick demo.
You should definitely use both: the accelerometer (movement) and the gyroscope (device angle). But for a true 3D effect you probably need to use the camera plus face detection.
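As a rough illustration of that suggestion, Core Motion's device-motion data fuses the accelerometer and gyroscope and exposes the device attitude, which can drive the background offset directly. A minimal sketch; the background view reference and the 40-point scale are illustrative assumptions:

```swift
import CoreMotion
import UIKit

// Minimal sketch of the "move the background with the device" idea using
// Core Motion device motion (fused accelerometer + gyroscope data).
// The background view reference and the 40-point scale are illustrative.
final class ParallaxController {
    private let motionManager = CMMotionManager()
    private weak var backgroundView: UIView?

    init(backgroundView: UIView) {
        self.backgroundView = backgroundView
    }

    func start() {
        guard motionManager.isDeviceMotionAvailable else { return }
        motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
        motionManager.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
            guard let motion = motion, let view = self?.backgroundView else { return }
            // Attitude (roll/pitch) responds to tilt even when the device barely
            // translates, which raw accelerometer values alone capture poorly.
            let x = CGFloat(motion.attitude.roll) * 40.0
            let y = CGFloat(motion.attitude.pitch) * 40.0
            view.transform = CGAffineTransform(translationX: x, y: y)
        }
    }

    func stop() {
        motionManager.stopDeviceMotionUpdates()
    }
}
```

For the stronger head-tracked illusion shown in the linked sample, the same offset would instead be driven by the detected face position from the front camera.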

How to know the camera has captured a clear image in iOS?

I'm doing real-time analysis of images captured by the camera in iOS (iPad or iPhone).
I only want to analyse images that are sharp; if an image is not clear, I want to discard it.
It's known that the image is not clear when the camera is not in focus.
Now my question is:
Can iOS give some indicator that the camera has finished focusing?
If I move the iPad camera to aim at some object, how can I get an indicator that the object's image is clear?
In my project I use AVCaptureDevice to autofocus on the object. It provides the "isAdjustingFocus" property
to indicate that the camera is focusing, but that is not enough to decide whether the image is clear.
I find that when I move the camera and then stop, iOS delays before it starts to focus. So while
I move the iPad the image is blurry, but the indicator still says the camera isn't focusing, so I have no way to know whether the image is clear.
You can use the point spread function to estimate blur, and by extension focus, of the image.
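A common practical proxy for that idea is to measure how much high-frequency detail survives in the frame, for example the variance of a Laplacian filter response over a grayscale copy: an out-of-focus frame scores much lower than a sharp one. A minimal sketch, assuming the frame has already been converted to an 8-bit grayscale buffer; the threshold is an illustrative value you would tune per device:

```swift
// Minimal sketch: estimate sharpness as the variance of a 3x3 Laplacian
// response over an 8-bit grayscale buffer. Blurry (out-of-focus) frames
// produce a much lower variance than sharp ones.
func laplacianVariance(pixels: [UInt8], width: Int, height: Int) -> Double {
    guard width > 2, height > 2 else { return 0 }
    var responses: [Double] = []
    responses.reserveCapacity((width - 2) * (height - 2))

    for y in 1..<(height - 1) {
        for x in 1..<(width - 1) {
            let center = Double(pixels[y * width + x])
            let up     = Double(pixels[(y - 1) * width + x])
            let down   = Double(pixels[(y + 1) * width + x])
            let left   = Double(pixels[y * width + (x - 1)])
            let right  = Double(pixels[y * width + (x + 1)])
            // Laplacian: strong response on edges, weak response under blur.
            responses.append(up + down + left + right - 4 * center)
        }
    }

    let mean = responses.reduce(0, +) / Double(responses.count)
    return responses.reduce(0) { $0 + ($1 - mean) * ($1 - mean) } / Double(responses.count)
}

// The 100.0 threshold is an illustrative assumption; calibrate it per device.
func frameLooksSharp(pixels: [UInt8], width: Int, height: Int) -> Bool {
    laplacianVariance(pixels: pixels, width: width, height: height) > 100.0
}
```

Combined with the "isAdjustingFocus" flag (skip frames while it is true), this gives a reasonable "clear enough to analyse" signal.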

Blur effect in a view on iOS

I want to use a UIImagePicker to display a camera preview. Over this preview I want to place an overlay view with controls.
Is it possible to apply any effect to the preview displayed from the camera? I particularly need to apply a blur effect to the camera preview.
So I want a blurred preview from the camera plus an overlay view with controls. If I decide to capture a still image from the camera, I need the original without the blur effect, so the blur must be applied only to the preview.
Is this possible with such a configuration, perhaps with AVFoundation used to access the camera preview, or in some other way, or is it impossible altogether?
With AVFoundation you could do almost everything you want, since you can obtain single frames from the camera and process them, but it could lead you to a dead end: applying a blur to an image in real time is a pretty intensive task with laggy video results, and it could cost you hours of coding. I would suggest the solution from James Webster or OpenGL shaders. Take a look at this awesome free library written by one of my favourite gurus, Brad: http://www.sunsetlakesoftware.com/2012/02/12/introducing-gpuimage-framework Even if you do not find the right filter, it will probably lead you to a correct implementation of what you want to do.
The right filter is a Gaussian blur, of course; I don't know if it is supported out of the box, but you could write it yourself.
I almost forgot to say that in iOS 5 you have full access to the Accelerate framework, made by Apple; you should look into that as well.
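On later iOS versions the same idea can be done without GPUImage: pull preview frames with AVCaptureVideoDataOutput, blur only the displayed copy with Core Image's CIGaussianBlur, and capture stills through a separate output so the saved photo stays sharp. A minimal sketch of the frame-blurring delegate; the preview image view and the blur radius are illustrative assumptions:

```swift
import AVFoundation
import CoreImage
import UIKit

// Minimal sketch: blur only the on-screen copy of each preview frame with
// Core Image, leaving the capture pipeline (and still images) untouched.
// `previewImageView` and the radius of 12 are illustrative assumptions.
final class BlurredPreviewDelegate: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let previewImageView: UIImageView
    private let ciContext = CIContext()

    init(previewImageView: UIImageView) {
        self.previewImageView = previewImageView
        super.init()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        let input = CIImage(cvPixelBuffer: pixelBuffer)
        let blurred = input
            .clampedToExtent()
            .applyingFilter("CIGaussianBlur", parameters: [kCIInputRadiusKey: 12.0])
            .cropped(to: input.extent)

        guard let cgImage = ciContext.createCGImage(blurred, from: blurred.extent) else { return }
        let uiImage = UIImage(cgImage: cgImage)

        DispatchQueue.main.async { [weak self] in
            self?.previewImageView.image = uiImage
        }
    }
}
```

The delegate is attached to an AVCaptureVideoDataOutput on the session; a separate still/photo output on the same session delivers the unblurred capture.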
From the reasonably limited amount of work I've done with UIImagePicker, I don't think it is possible to apply the blur to the image you see using programmatic filters.
What you might be able to do is use the overlay to simulate the blur. You could do this, for example, by adding an overlay which contains an image of semi-transparent frosted glass.
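A minimal sketch of that overlay approach; the "FrostedGlass" asset name and the alpha value are illustrative assumptions:

```swift
import UIKit

// Minimal sketch of the frosted-glass-overlay idea: the picker's preview is
// untouched, and a semi-transparent image placed over it fakes the blur.
// The "FrostedGlass" asset name and alpha are illustrative assumptions.
func makeBlurredLookingPicker() -> UIImagePickerController {
    let picker = UIImagePickerController()
    picker.sourceType = .camera
    picker.showsCameraControls = true

    let overlay = UIImageView(image: UIImage(named: "FrostedGlass"))
    overlay.frame = UIScreen.main.bounds
    overlay.alpha = 0.6
    overlay.isUserInteractionEnabled = false  // let touches reach the camera controls
    picker.cameraOverlayView = overlay

    return picker
}
```

Any still image the user captures through the picker is delivered without the overlay, which matches the requirement that only the preview looks blurred.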
